502 files changed, 10701 insertions, 4675 deletions
@@ -130,6 +130,7 @@ Domen Puncer <[email protected]> Douglas Gilbert <[email protected]> Ed L. Cashin <[email protected]> Erik Kaneda <[email protected]> <[email protected]> +Eugen Hristev <[email protected]> <[email protected]> Evgeniy Polyakov <[email protected]> Ezequiel Garcia <[email protected]> <[email protected]> Felipe W Damasio <[email protected]> diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst index c8ae7c897f14..74cec76be9f2 100644 --- a/Documentation/admin-guide/cgroup-v2.rst +++ b/Documentation/admin-guide/cgroup-v2.rst @@ -1245,13 +1245,17 @@ PAGE_SIZE multiple when read back. This is a simple interface to trigger memory reclaim in the target cgroup. - This file accepts a string which contains the number of bytes to - reclaim. + This file accepts a single key, the number of bytes to reclaim. + No nested keys are currently supported. Example:: echo "1G" > memory.reclaim + The interface can be later extended with nested keys to + configure the reclaim behavior. For example, specify the + type of memory to reclaim from (anon, file, ..). + Please note that the kernel can over or under reclaim from the target cgroup. If less bytes are reclaimed than the specified amount, -EAGAIN is returned. @@ -1263,13 +1267,6 @@ PAGE_SIZE multiple when read back. This means that the networking layer will not adapt based on reclaim induced by memory.reclaim. - This file also allows the user to specify the nodes to reclaim from, - via the 'nodes=' key, for example:: - - echo "1G nodes=0,1" > memory.reclaim - - The above instructs the kernel to reclaim memory from nodes 0,1. - memory.peak A read-only single value file which exists on non-root cgroups. diff --git a/Documentation/admin-guide/sysctl/net.rst b/Documentation/admin-guide/sysctl/net.rst index 6394f5dc2303..466c560b0c30 100644 --- a/Documentation/admin-guide/sysctl/net.rst +++ b/Documentation/admin-guide/sysctl/net.rst @@ -215,6 +215,12 @@ rmem_max The maximum receive socket buffer size in bytes. +rps_default_mask +---------------- + +The default RPS CPU mask used on newly created network devices. An empty +mask means RPS disabled by default. + tstamp_allow_data ----------------- Allow processes to receive tx timestamps looped together with the original diff --git a/Documentation/bpf/bpf_design_QA.rst b/Documentation/bpf/bpf_design_QA.rst index cec2371173d7..bfff0e7e37c2 100644 --- a/Documentation/bpf/bpf_design_QA.rst +++ b/Documentation/bpf/bpf_design_QA.rst @@ -208,6 +208,10 @@ data structures and compile with kernel internal headers. Both of these kernel internals are subject to change and can break with newer kernels such that the program needs to be adapted accordingly. +New BPF functionality is generally added through the use of kfuncs instead of +new helpers. Kfuncs are not considered part of the stable API, and have their own +lifecycle expectations as described in :ref:`BPF_kfunc_lifecycle_expectations`. + Q: Are tracepoints part of the stable ABI? ------------------------------------------ A: NO. Tracepoints are tied to internal implementation details hence they are @@ -236,8 +240,8 @@ A: NO. Classic BPF programs are converted into extend BPF instructions. Q: Can BPF call arbitrary kernel functions? ------------------------------------------- -A: NO. BPF programs can only call a set of helper functions which -is defined for every program type. +A: NO. BPF programs can only call specific functions exposed as BPF helpers or +kfuncs. 
The set of available functions is defined for every program type. Q: Can BPF overwrite arbitrary kernel memory? --------------------------------------------- @@ -263,7 +267,12 @@ Q: New functionality via kernel modules? Q: Can BPF functionality such as new program or map types, new helpers, etc be added out of kernel module code? -A: NO. +A: Yes, through kfuncs and kptrs + +The core BPF functionality such as program types, maps and helpers cannot be +added to by modules. However, modules can expose functionality to BPF programs +by exporting kfuncs (which may return pointers to module-internal data +structures as kptrs). Q: Directly calling kernel function is an ABI? ---------------------------------------------- @@ -278,7 +287,8 @@ kernel functions have already been used by other kernel tcp cc (congestion-control) implementations. If any of these kernel functions has changed, both the in-tree and out-of-tree kernel tcp cc implementations have to be changed. The same goes for the bpf -programs and they have to be adjusted accordingly. +programs and they have to be adjusted accordingly. See +:ref:`BPF_kfunc_lifecycle_expectations` for details. Q: Attaching to arbitrary kernel functions is an ABI? ----------------------------------------------------- @@ -340,6 +350,7 @@ compatibility for these features? A: NO. -Unlike map value types, there are no stability guarantees for this case. The -whole API to work with allocated objects and any support for special fields -inside them is unstable (since it is exposed through kfuncs). +Unlike map value types, the API to work with allocated objects and any support +for special fields inside them is exposed through kfuncs, and thus has the same +lifecycle expectations as the kfuncs themselves. See +:ref:`BPF_kfunc_lifecycle_expectations` for details. diff --git a/Documentation/bpf/instruction-set.rst b/Documentation/bpf/instruction-set.rst index 2d3fe59bd260..af515de5fc38 100644 --- a/Documentation/bpf/instruction-set.rst +++ b/Documentation/bpf/instruction-set.rst @@ -7,6 +7,11 @@ eBPF Instruction Set Specification, v1.0 This document specifies version 1.0 of the eBPF instruction set. +Documentation conventions +========================= + +For brevity, this document uses the type notion "u64", "u32", etc. +to mean an unsigned integer whose width is the specified number of bits. Registers and calling convention ================================ @@ -30,20 +35,56 @@ Instruction encoding eBPF has two instruction encodings: * the basic instruction encoding, which uses 64 bits to encode an instruction -* the wide instruction encoding, which appends a second 64-bit immediate value - (imm64) after the basic instruction for a total of 128 bits. +* the wide instruction encoding, which appends a second 64-bit immediate (i.e., + constant) value after the basic instruction for a total of 128 bits. 
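For orientation only: the basic 64-bit encoding described by the table below corresponds to the C layout of ``struct bpf_insn`` in ``include/uapi/linux/bpf.h``::

    struct bpf_insn {
            __u8    code;           /* opcode */
            __u8    dst_reg:4;      /* dest register */
            __u8    src_reg:4;      /* source register */
            __s16   off;            /* signed offset */
            __s32   imm;            /* signed immediate constant */
    };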
+ +The basic instruction encoding is as follows, where MSB and LSB mean the most significant +bits and least significant bits, respectively: + +============= ======= ======= ======= ============ +32 bits (MSB) 16 bits 4 bits 4 bits 8 bits (LSB) +============= ======= ======= ======= ============ +imm offset src_reg dst_reg opcode +============= ======= ======= ======= ============ + +**imm** + signed integer immediate value -The basic instruction encoding looks as follows: +**offset** + signed integer offset used with pointer arithmetic -============= ======= =============== ==================== ============ -32 bits (MSB) 16 bits 4 bits 4 bits 8 bits (LSB) -============= ======= =============== ==================== ============ -immediate offset source register destination register opcode -============= ======= =============== ==================== ============ +**src_reg** + the source register number (0-10), except where otherwise specified + (`64-bit immediate instructions`_ reuse this field for other purposes) + +**dst_reg** + destination register number (0-10) + +**opcode** + operation to perform Note that most instructions do not use all of the fields. Unused fields shall be cleared to zero. +As discussed below in `64-bit immediate instructions`_, a 64-bit immediate +instruction uses a 64-bit immediate value that is constructed as follows. +The 64 bits following the basic instruction contain a pseudo instruction +using the same format but with opcode, dst_reg, src_reg, and offset all set to zero, +and imm containing the high 32 bits of the immediate value. + +================= ================== +64 bits (MSB) 64 bits (LSB) +================= ================== +basic instruction pseudo instruction +================= ================== + +Thus the 64-bit immediate value is constructed as follows: + + imm64 = (next_imm << 32) | imm + +where 'next_imm' refers to the imm value of the pseudo instruction +following the basic instruction. + Instruction classes ------------------- @@ -71,27 +112,32 @@ For arithmetic and jump instructions (``BPF_ALU``, ``BPF_ALU64``, ``BPF_JMP`` an ============== ====== ================= 4 bits (MSB) 1 bit 3 bits (LSB) ============== ====== ================= -operation code source instruction class +code source instruction class ============== ====== ================= -The 4th bit encodes the source operand: +**code** + the operation code, whose meaning varies by instruction class - ====== ===== ======================================== - source value description - ====== ===== ======================================== - BPF_K 0x00 use 32-bit immediate as source operand - BPF_X 0x08 use 'src_reg' register as source operand - ====== ===== ======================================== +**source** + the source operand location, which unless otherwise specified is one of: -The four MSB bits store the operation code. + ====== ===== ============================================== + source value description + ====== ===== ============================================== + BPF_K 0x00 use 32-bit 'imm' value as source operand + BPF_X 0x08 use 'src_reg' register value as source operand + ====== ===== ============================================== +**instruction class** + the instruction class (see `Instruction classes`_) Arithmetic instructions ----------------------- ``BPF_ALU`` uses 32-bit wide operands while ``BPF_ALU64`` uses 64-bit wide operands for otherwise identical operations. 
-The 'code' field encodes the operation as below: +The 'code' field encodes the operation as below, where 'src' and 'dst' refer +to the values of the source and destination registers, respectively. ======== ===== ========================================================== code value description @@ -121,19 +167,21 @@ the destination register is unchanged whereas for ``BPF_ALU`` the upper ``BPF_ADD | BPF_X | BPF_ALU`` means:: - dst_reg = (u32) dst_reg + (u32) src_reg; + dst = (u32) ((u32) dst + (u32) src) + +where '(u32)' indicates that the upper 32 bits are zeroed. ``BPF_ADD | BPF_X | BPF_ALU64`` means:: - dst_reg = dst_reg + src_reg + dst = dst + src ``BPF_XOR | BPF_K | BPF_ALU`` means:: - dst_reg = (u32) dst_reg ^ (u32) imm32 + dst = (u32) dst ^ (u32) imm32 ``BPF_XOR | BPF_K | BPF_ALU64`` means:: - dst_reg = dst_reg ^ imm32 + dst = dst ^ imm32 Also note that the division and modulo operations are unsigned. Thus, for ``BPF_ALU``, 'imm' is first interpreted as an unsigned 32-bit value, whereas @@ -167,11 +215,11 @@ Examples: ``BPF_ALU | BPF_TO_LE | BPF_END`` with imm = 16 means:: - dst_reg = htole16(dst_reg) + dst = htole16(dst) ``BPF_ALU | BPF_TO_BE | BPF_END`` with imm = 64 means:: - dst_reg = htobe64(dst_reg) + dst = htobe64(dst) Jump instructions ----------------- @@ -246,15 +294,15 @@ instructions that transfer data between a register and memory. ``BPF_MEM | <size> | BPF_STX`` means:: - *(size *) (dst_reg + off) = src_reg + *(size *) (dst + offset) = src ``BPF_MEM | <size> | BPF_ST`` means:: - *(size *) (dst_reg + off) = imm32 + *(size *) (dst + offset) = imm32 ``BPF_MEM | <size> | BPF_LDX`` means:: - dst_reg = *(size *) (src_reg + off) + dst = *(size *) (src + offset) Where size is one of: ``BPF_B``, ``BPF_H``, ``BPF_W``, or ``BPF_DW``. @@ -288,11 +336,11 @@ BPF_XOR 0xa0 atomic xor ``BPF_ATOMIC | BPF_W | BPF_STX`` with 'imm' = BPF_ADD means:: - *(u32 *)(dst_reg + off16) += src_reg + *(u32 *)(dst + offset) += src ``BPF_ATOMIC | BPF_DW | BPF_STX`` with 'imm' = BPF ADD means:: - *(u64 *)(dst_reg + off16) += src_reg + *(u64 *)(dst + offset) += src In addition to the simple atomic operations, there also is a modifier and two complex atomic operations: @@ -307,16 +355,16 @@ BPF_CMPXCHG 0xf0 | BPF_FETCH atomic compare and exchange The ``BPF_FETCH`` modifier is optional for simple atomic operations, and always set for the complex atomic operations. If the ``BPF_FETCH`` flag -is set, then the operation also overwrites ``src_reg`` with the value that +is set, then the operation also overwrites ``src`` with the value that was in memory before it was modified. -The ``BPF_XCHG`` operation atomically exchanges ``src_reg`` with the value -addressed by ``dst_reg + off``. +The ``BPF_XCHG`` operation atomically exchanges ``src`` with the value +addressed by ``dst + offset``. The ``BPF_CMPXCHG`` operation atomically compares the value addressed by -``dst_reg + off`` with ``R0``. If they match, the value addressed by -``dst_reg + off`` is replaced with ``src_reg``. In either case, the -value that was at ``dst_reg + off`` before the operation is zero-extended +``dst + offset`` with ``R0``. If they match, the value addressed by +``dst + offset`` is replaced with ``src``. In either case, the +value that was at ``dst + offset`` before the operation is zero-extended and loaded back to ``R0``. 64-bit immediate instructions @@ -329,7 +377,7 @@ There is currently only one such instruction. 
``BPF_LD | BPF_DW | BPF_IMM`` means:: - dst_reg = imm64 + dst = imm64 Legacy BPF Packet access instructions diff --git a/Documentation/bpf/kfuncs.rst b/Documentation/bpf/kfuncs.rst index 1a683225d080..ca96ef3f6896 100644 --- a/Documentation/bpf/kfuncs.rst +++ b/Documentation/bpf/kfuncs.rst @@ -13,7 +13,7 @@ BPF Kernel Functions or more commonly known as kfuncs are functions in the Linux kernel which are exposed for use by BPF programs. Unlike normal BPF helpers, kfuncs do not have a stable interface and can change from one kernel release to another. Hence, BPF programs need to be updated in response to changes in the -kernel. +kernel. See :ref:`BPF_kfunc_lifecycle_expectations` for more information. 2. Defining a kfunc =================== @@ -41,7 +41,7 @@ An example is given below:: __diag_ignore_all("-Wmissing-prototypes", "Global kfuncs as their definitions will be in BTF"); - struct task_struct *bpf_find_get_task_by_vpid(pid_t nr) + __bpf_kfunc struct task_struct *bpf_find_get_task_by_vpid(pid_t nr) { return find_get_task_by_vpid(nr); } @@ -66,7 +66,7 @@ kfunc with a __tag, where tag may be one of the supported annotations. This annotation is used to indicate a memory and size pair in the argument list. An example is given below:: - void bpf_memzero(void *mem, int mem__sz) + __bpf_kfunc void bpf_memzero(void *mem, int mem__sz) { ... } @@ -86,7 +86,7 @@ safety of the program. An example is given below:: - void *bpf_obj_new(u32 local_type_id__k, ...) + __bpf_kfunc void *bpf_obj_new(u32 local_type_id__k, ...) { ... } @@ -125,6 +125,20 @@ flags on a set of kfuncs as follows:: This set encodes the BTF ID of each kfunc listed above, and encodes the flags along with it. Ofcourse, it is also allowed to specify no flags. +kfunc definitions should also always be annotated with the ``__bpf_kfunc`` +macro. This prevents issues such as the compiler inlining the kfunc if it's a +static kernel function, or the function being elided in an LTO build as it's +not used in the rest of the kernel. Developers should not manually add +annotations to their kfunc to prevent these issues. If an annotation is +required to prevent such an issue with your kfunc, it is a bug and should be +added to the definition of the macro so that other kfuncs are similarly +protected. An example is given below:: + + __bpf_kfunc struct task_struct *bpf_get_task_pid(s32 pid) + { + ... + } + 2.4.1 KF_ACQUIRE flag --------------------- @@ -224,6 +238,28 @@ single argument which must be a trusted argument or a MEM_RCU pointer. The argument may have reference count of 0 and the kfunc must take this into consideration. +.. _KF_deprecated_flag: + +2.4.9 KF_DEPRECATED flag +------------------------ + +The KF_DEPRECATED flag is used for kfuncs which are scheduled to be +changed or removed in a subsequent kernel release. A kfunc that is +marked with KF_DEPRECATED should also have any relevant information +captured in its kernel doc. Such information typically includes the +kfunc's expected remaining lifespan, a recommendation for new +functionality that can replace it if any is available, and possibly a +rationale for why it is being removed. + +Note that while on some occasions, a KF_DEPRECATED kfunc may continue to be +supported and have its KF_DEPRECATED flag removed, it is likely to be far more +difficult to remove a KF_DEPRECATED flag after it's been added than it is to +prevent it from being added in the first place. 
As described in +:ref:`BPF_kfunc_lifecycle_expectations`, users that rely on specific kfuncs are +encouraged to make their use-cases known as early as possible, and participate +in upstream discussions regarding whether to keep, change, deprecate, or remove +those kfuncs if and when such discussions occur. + 2.5 Registering the kfuncs -------------------------- @@ -290,14 +326,107 @@ In order to accommodate such requirements, the verifier will enforce strict PTR_TO_BTF_ID type matching if two types have the exact same name, with one being suffixed with ``___init``. -3. Core kfuncs +.. _BPF_kfunc_lifecycle_expectations: + +3. kfunc lifecycle expectations +=============================== + +kfuncs provide a kernel <-> kernel API, and thus are not bound by any of the +strict stability restrictions associated with kernel <-> user UAPIs. This means +they can be thought of as similar to EXPORT_SYMBOL_GPL, and can therefore be +modified or removed by a maintainer of the subsystem they're defined in when +it's deemed necessary. + +Like any other change to the kernel, maintainers will not change or remove a +kfunc without having a reasonable justification. Whether or not they'll choose +to change a kfunc will ultimately depend on a variety of factors, such as how +widely used the kfunc is, how long the kfunc has been in the kernel, whether an +alternative kfunc exists, what the norm is in terms of stability for the +subsystem in question, and of course what the technical cost is of continuing +to support the kfunc. + +There are several implications of this: + +a) kfuncs that are widely used or have been in the kernel for a long time will + be more difficult to justify being changed or removed by a maintainer. In + other words, kfuncs that are known to have a lot of users and provide + significant value provide stronger incentives for maintainers to invest the + time and complexity in supporting them. It is therefore important for + developers that are using kfuncs in their BPF programs to communicate and + explain how and why those kfuncs are being used, and to participate in + discussions regarding those kfuncs when they occur upstream. + +b) Unlike regular kernel symbols marked with EXPORT_SYMBOL_GPL, BPF programs + that call kfuncs are generally not part of the kernel tree. This means that + refactoring cannot typically change callers in-place when a kfunc changes, + as is done for e.g. an upstreamed driver being updated in place when a + kernel symbol is changed. + + Unlike with regular kernel symbols, this is expected behavior for BPF + symbols, and out-of-tree BPF programs that use kfuncs should be considered + relevant to discussions and decisions around modifying and removing those + kfuncs. The BPF community will take an active role in participating in + upstream discussions when necessary to ensure that the perspectives of such + users are taken into account. + +c) A kfunc will never have any hard stability guarantees. BPF APIs cannot and + will not ever hard-block a change in the kernel purely for stability + reasons. That being said, kfuncs are features that are meant to solve + problems and provide value to users. The decision of whether to change or + remove a kfunc is a multivariate technical decision that is made on a + case-by-case basis, and which is informed by data points such as those + mentioned above. 
It is expected that a kfunc being removed or changed with + no warning will not be a common occurrence or take place without sound + justification, but it is a possibility that must be accepted if one is to + use kfuncs. + +3.1 kfunc deprecation +--------------------- + +As described above, while sometimes a maintainer may find that a kfunc must be +changed or removed immediately to accommodate some changes in their subsystem, +usually kfuncs will be able to accommodate a longer and more measured +deprecation process. For example, if a new kfunc comes along which provides +superior functionality to an existing kfunc, the existing kfunc may be +deprecated for some period of time to allow users to migrate their BPF programs +to use the new one. Or, if a kfunc has no known users, a decision may be made +to remove the kfunc (without providing an alternative API) after some +deprecation period so as to provide users with a window to notify the kfunc +maintainer if it turns out that the kfunc is actually being used. + +It's expected that the common case will be that kfuncs will go through a +deprecation period rather than being changed or removed without warning. As +described in :ref:`KF_deprecated_flag`, the kfunc framework provides the +KF_DEPRECATED flag to kfunc developers to signal to users that a kfunc has been +deprecated. Once a kfunc has been marked with KF_DEPRECATED, the following +procedure is followed for removal: + +1. Any relevant information for deprecated kfuncs is documented in the kfunc's + kernel docs. This documentation will typically include the kfunc's expected + remaining lifespan, a recommendation for new functionality that can replace + the usage of the deprecated function (or an explanation as to why no such + replacement exists), etc. + +2. The deprecated kfunc is kept in the kernel for some period of time after it + was first marked as deprecated. This time period will be chosen on a + case-by-case basis, and will typically depend on how widespread the use of + the kfunc is, how long it has been in the kernel, and how hard it is to move + to alternatives. This deprecation time period is "best effort", and as + described :ref:`above<BPF_kfunc_lifecycle_expectations>`, circumstances may + sometimes dictate that the kfunc be removed before the full intended + deprecation period has elapsed. + +3. After the deprecation period the kfunc will be removed. At this point, BPF + programs calling the kfunc will be rejected by the verifier. + +4. Core kfuncs ============== The BPF subsystem provides a number of "core" kfuncs that are potentially applicable to a wide variety of different possible use cases and programs. Those kfuncs are documented here. -3.1 struct task_struct * kfuncs +4.1 struct task_struct * kfuncs ------------------------------- There are a number of kfuncs that allow ``struct task_struct *`` objects to be @@ -373,7 +502,7 @@ Here is an example of it being used: return 0; } -3.2 struct cgroup * kfuncs +4.2 struct cgroup * kfuncs -------------------------- ``struct cgroup *`` objects also have acquire and release functions: @@ -488,7 +617,7 @@ the verifier. 
bpf_cgroup_ancestor() can be used as follows: return 0; } -3.3 struct cpumask * kfuncs +4.3 struct cpumask * kfuncs --------------------------- BPF provides a set of kfuncs that can be used to query, allocate, mutate, and diff --git a/Documentation/bpf/libbpf/libbpf_naming_convention.rst b/Documentation/bpf/libbpf/libbpf_naming_convention.rst index c5ac97f3d4c4..b5b41b61b3c0 100644 --- a/Documentation/bpf/libbpf/libbpf_naming_convention.rst +++ b/Documentation/bpf/libbpf/libbpf_naming_convention.rst @@ -83,8 +83,8 @@ This prevents from accidentally exporting a symbol, that is not supposed to be a part of ABI what, in turn, improves both libbpf developer- and user-experiences. -ABI versionning ---------------- +ABI versioning +-------------- To make future ABI extensions possible libbpf ABI is versioned. Versioning is implemented by ``libbpf.map`` version script that is @@ -148,7 +148,7 @@ API documentation convention The libbpf API is documented via comments above definitions in header files. These comments can be rendered by doxygen and sphinx for well organized html output. This section describes the -convention in which these comments should be formated. +convention in which these comments should be formatted. Here is an example from btf.h: diff --git a/Documentation/bpf/map_xskmap.rst b/Documentation/bpf/map_xskmap.rst index 7093b8208451..dc143edd9233 100644 --- a/Documentation/bpf/map_xskmap.rst +++ b/Documentation/bpf/map_xskmap.rst @@ -178,7 +178,7 @@ The following code snippet shows how to update an XSKMAP with an XSK entry. For an example on how create AF_XDP sockets, please see the AF_XDP-example and AF_XDP-forwarding programs in the `bpf-examples`_ directory in the `libxdp`_ repository. -For a detailed explaination of the AF_XDP interface please see: +For a detailed explanation of the AF_XDP interface please see: - `libxdp-readme`_. - `AF_XDP`_ kernel documentation. diff --git a/Documentation/bpf/ringbuf.rst b/Documentation/bpf/ringbuf.rst index 6a615cd62bda..a99cd05d79d4 100644 --- a/Documentation/bpf/ringbuf.rst +++ b/Documentation/bpf/ringbuf.rst @@ -124,7 +124,7 @@ buffer. Currently 4 are supported: - ``BPF_RB_AVAIL_DATA`` returns amount of unconsumed data in ring buffer; - ``BPF_RB_RING_SIZE`` returns the size of ring buffer; -- ``BPF_RB_CONS_POS``/``BPF_RB_PROD_POS`` returns current logical possition +- ``BPF_RB_CONS_POS``/``BPF_RB_PROD_POS`` returns current logical position of consumer/producer, respectively. Returned values are momentarily snapshots of ring buffer state and could be @@ -146,7 +146,7 @@ Design and Implementation This reserve/commit schema allows a natural way for multiple producers, either on different CPUs or even on the same CPU/in the same BPF program, to reserve independent records and work with them without blocking other producers. This -means that if BPF program was interruped by another BPF program sharing the +means that if BPF program was interrupted by another BPF program sharing the same ring buffer, they will both get a record reserved (provided there is enough space left) and can work with it and submit it independently. This applies to NMI context as well, except that due to using a spinlock during diff --git a/Documentation/bpf/verifier.rst b/Documentation/bpf/verifier.rst index d4326caf01f9..f0ec19db301c 100644 --- a/Documentation/bpf/verifier.rst +++ b/Documentation/bpf/verifier.rst @@ -192,7 +192,7 @@ checked and found to be non-NULL, all copies can become PTR_TO_MAP_VALUEs. 
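As a minimal sketch of the lookup pattern this tracking enables (``my_map`` and ``key`` are hypothetical names, not taken from the verifier documentation)::

    long *value = bpf_map_lookup_elem(&my_map, &key);
    if (!value)
            return 0;       /* pointer may be NULL until checked */
    /* after the check this copy is a PTR_TO_MAP_VALUE and may be dereferenced */
    *value += 1;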
As well as range-checking, the tracked information is also used for enforcing alignment of pointer accesses. For instance, on most systems the packet pointer is 2 bytes after a 4-byte alignment. If a program adds 14 bytes to that to jump -over the Ethernet header, then reads IHL and addes (IHL * 4), the resulting +over the Ethernet header, then reads IHL and adds (IHL * 4), the resulting pointer will have a variable offset known to be 4n+2 for some n, so adding the 2 bytes (NET_IP_ALIGN) gives a 4-byte alignment and so word-sized accesses through that pointer are safe. @@ -316,6 +316,301 @@ Pruning considers not only the registers but also the stack (and any spilled registers it may hold). They must all be safe for the branch to be pruned. This is implemented in states_equal(). +Some technical details about state pruning implementation could be found below. + +Register liveness tracking +-------------------------- + +In order to make state pruning effective, liveness state is tracked for each +register and stack slot. The basic idea is to track which registers and stack +slots are actually used during subseqeuent execution of the program, until +program exit is reached. Registers and stack slots that were never used could be +removed from the cached state thus making more states equivalent to a cached +state. This could be illustrated by the following program:: + + 0: call bpf_get_prandom_u32() + 1: r1 = 0 + 2: if r0 == 0 goto +1 + 3: r0 = 1 + --- checkpoint --- + 4: r0 = r1 + 5: exit + +Suppose that a state cache entry is created at instruction #4 (such entries are +also called "checkpoints" in the text below). The verifier could reach the +instruction with one of two possible register states: + +* r0 = 1, r1 = 0 +* r0 = 0, r1 = 0 + +However, only the value of register ``r1`` is important to successfully finish +verification. The goal of the liveness tracking algorithm is to spot this fact +and figure out that both states are actually equivalent. + +Data structures +~~~~~~~~~~~~~~~ + +Liveness is tracked using the following data structures:: + + enum bpf_reg_liveness { + REG_LIVE_NONE = 0, + REG_LIVE_READ32 = 0x1, + REG_LIVE_READ64 = 0x2, + REG_LIVE_READ = REG_LIVE_READ32 | REG_LIVE_READ64, + REG_LIVE_WRITTEN = 0x4, + REG_LIVE_DONE = 0x8, + }; + + struct bpf_reg_state { + ... + struct bpf_reg_state *parent; + ... + enum bpf_reg_liveness live; + ... + }; + + struct bpf_stack_state { + struct bpf_reg_state spilled_ptr; + ... + }; + + struct bpf_func_state { + struct bpf_reg_state regs[MAX_BPF_REG]; + ... + struct bpf_stack_state *stack; + } + + struct bpf_verifier_state { + struct bpf_func_state *frame[MAX_CALL_FRAMES]; + struct bpf_verifier_state *parent; + ... + } + +* ``REG_LIVE_NONE`` is an initial value assigned to ``->live`` fields upon new + verifier state creation; + +* ``REG_LIVE_WRITTEN`` means that the value of the register (or stack slot) is + defined by some instruction verified between this verifier state's parent and + verifier state itself; + +* ``REG_LIVE_READ{32,64}`` means that the value of the register (or stack slot) + is read by a some child state of this verifier state; + +* ``REG_LIVE_DONE`` is a marker used by ``clean_verifier_state()`` to avoid + processing same verifier state multiple times and for some sanity checks; + +* ``->live`` field values are formed by combining ``enum bpf_reg_liveness`` + values using bitwise or. 
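As an illustrative combination (not a quote from the verifier source), a register that was written in the straight-line code leading to a state and then read by one of its child states would carry both marks::

    reg->live = REG_LIVE_WRITTEN | REG_LIVE_READ64;   /* 0x4 | 0x2 */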
+ +Register parentage chains +~~~~~~~~~~~~~~~~~~~~~~~~~ + +In order to propagate information between parent and child states, a *register +parentage chain* is established. Each register or stack slot is linked to a +corresponding register or stack slot in its parent state via a ``->parent`` +pointer. This link is established upon state creation in ``is_state_visited()`` +and might be modified by ``set_callee_state()`` called from +``__check_func_call()``. + +The rules for correspondence between registers / stack slots are as follows: + +* For the current stack frame, registers and stack slots of the new state are + linked to the registers and stack slots of the parent state with the same + indices. + +* For the outer stack frames, only caller saved registers (r6-r9) and stack + slots are linked to the registers and stack slots of the parent state with the + same indices. + +* When function call is processed a new ``struct bpf_func_state`` instance is + allocated, it encapsulates a new set of registers and stack slots. For this + new frame, parent links for r6-r9 and stack slots are set to nil, parent links + for r1-r5 are set to match caller r1-r5 parent links. + +This could be illustrated by the following diagram (arrows stand for +``->parent`` pointers):: + + ... ; Frame #0, some instructions + --- checkpoint #0 --- + 1 : r6 = 42 ; Frame #0 + --- checkpoint #1 --- + 2 : call foo() ; Frame #0 + ... ; Frame #1, instructions from foo() + --- checkpoint #2 --- + ... ; Frame #1, instructions from foo() + --- checkpoint #3 --- + exit ; Frame #1, return from foo() + 3 : r1 = r6 ; Frame #0 <- current state + + +-------------------------------+-------------------------------+ + | Frame #0 | Frame #1 | + Checkpoint +-------------------------------+-------------------------------+ + #0 | r0 | r1-r5 | r6-r9 | fp-8 ... | + +-------------------------------+ + ^ ^ ^ ^ + | | | | + Checkpoint +-------------------------------+ + #1 | r0 | r1-r5 | r6-r9 | fp-8 ... | + +-------------------------------+ + ^ ^ ^ + |_______|_______|_______________ + | | | + nil nil | | | nil nil + | | | | | | | + Checkpoint +-------------------------------+-------------------------------+ + #2 | r0 | r1-r5 | r6-r9 | fp-8 ... | r0 | r1-r5 | r6-r9 | fp-8 ... | + +-------------------------------+-------------------------------+ + ^ ^ ^ ^ ^ + nil nil | | | | | + | | | | | | | + Checkpoint +-------------------------------+-------------------------------+ + #3 | r0 | r1-r5 | r6-r9 | fp-8 ... | r0 | r1-r5 | r6-r9 | fp-8 ... | + +-------------------------------+-------------------------------+ + ^ ^ + nil nil | | + | | | | + Current +-------------------------------+ + state | r0 | r1-r5 | r6-r9 | fp-8 ... | + +-------------------------------+ + \ + r6 read mark is propagated via these links + all the way up to checkpoint #1. + The checkpoint #1 contains a write mark for r6 + because of instruction (1), thus read propagation + does not reach checkpoint #0 (see section below). + +Liveness marks tracking +~~~~~~~~~~~~~~~~~~~~~~~ + +For each processed instruction, the verifier tracks read and written registers +and stack slots. The main idea of the algorithm is that read marks propagate +back along the state parentage chain until they hit a write mark, which 'screens +off' earlier states from the read. 
The information about reads is propagated by +function ``mark_reg_read()`` which could be summarized as follows:: + + mark_reg_read(struct bpf_reg_state *state, ...): + parent = state->parent + while parent: + if state->live & REG_LIVE_WRITTEN: + break + if parent->live & REG_LIVE_READ64: + break + parent->live |= REG_LIVE_READ64 + state = parent + parent = state->parent + +Notes: + +* The read marks are applied to the **parent** state while write marks are + applied to the **current** state. The write mark on a register or stack slot + means that it is updated by some instruction in the straight-line code leading + from the parent state to the current state. + +* Details about REG_LIVE_READ32 are omitted. + +* Function ``propagate_liveness()`` (see section :ref:`read_marks_for_cache_hits`) + might override the first parent link. Please refer to the comments in the + ``propagate_liveness()`` and ``mark_reg_read()`` source code for further + details. + +Because stack writes could have different sizes ``REG_LIVE_WRITTEN`` marks are +applied conservatively: stack slots are marked as written only if write size +corresponds to the size of the register, e.g. see function ``save_register_state()``. + +Consider the following example:: + + 0: (*u64)(r10 - 8) = 0 ; define 8 bytes of fp-8 + --- checkpoint #0 --- + 1: (*u32)(r10 - 8) = 1 ; redefine lower 4 bytes + 2: r1 = (*u32)(r10 - 8) ; read lower 4 bytes defined at (1) + 3: r2 = (*u32)(r10 - 4) ; read upper 4 bytes defined at (0) + +As stated above, the write at (1) does not count as ``REG_LIVE_WRITTEN``. Should +it be otherwise, the algorithm above wouldn't be able to propagate the read mark +from (3) to checkpoint #0. + +Once the ``BPF_EXIT`` instruction is reached ``update_branch_counts()`` is +called to update the ``->branches`` counter for each verifier state in a chain +of parent verifier states. When the ``->branches`` counter reaches zero the +verifier state becomes a valid entry in a set of cached verifier states. + +Each entry of the verifier states cache is post-processed by a function +``clean_live_states()``. This function marks all registers and stack slots +without ``REG_LIVE_READ{32,64}`` marks as ``NOT_INIT`` or ``STACK_INVALID``. +Registers/stack slots marked in this way are ignored in function ``stacksafe()`` +called from ``states_equal()`` when a state cache entry is considered for +equivalence with a current state. + +Now it is possible to explain how the example from the beginning of the section +works:: + + 0: call bpf_get_prandom_u32() + 1: r1 = 0 + 2: if r0 == 0 goto +1 + 3: r0 = 1 + --- checkpoint[0] --- + 4: r0 = r1 + 5: exit + +* At instruction #2 branching point is reached and state ``{ r0 == 0, r1 == 0, pc == 4 }`` + is pushed to states processing queue (pc stands for program counter). + +* At instruction #4: + + * ``checkpoint[0]`` states cache entry is created: ``{ r0 == 1, r1 == 0, pc == 4 }``; + * ``checkpoint[0].r0`` is marked as written; + * ``checkpoint[0].r1`` is marked as read; + +* At instruction #5 exit is reached and ``checkpoint[0]`` can now be processed + by ``clean_live_states()``. After this processing ``checkpoint[0].r0`` has a + read mark and all other registers and stack slots are marked as ``NOT_INIT`` + or ``STACK_INVALID`` + +* The state ``{ r0 == 0, r1 == 0, pc == 4 }`` is popped from the states queue + and is compared against a cached state ``{ r1 == 0, pc == 4 }``, the states + are considered equivalent. + +.. 
_read_marks_for_cache_hits: + +Read marks propagation for cache hits +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Another point is the handling of read marks when a previously verified state is +found in the states cache. Upon cache hit verifier must behave in the same way +as if the current state was verified to the program exit. This means that all +read marks, present on registers and stack slots of the cached state, must be +propagated over the parentage chain of the current state. Example below shows +why this is important. Function ``propagate_liveness()`` handles this case. + +Consider the following state parentage chain (S is a starting state, A-E are +derived states, -> arrows show which state is derived from which):: + + r1 read + <------------- A[r1] == 0 + C[r1] == 0 + S ---> A ---> B ---> exit E[r1] == 1 + | + ` ---> C ---> D + | + ` ---> E ^ + |___ suppose all these + ^ states are at insn #Y + | + suppose all these + states are at insn #X + +* Chain of states ``S -> A -> B -> exit`` is verified first. + +* While ``B -> exit`` is verified, register ``r1`` is read and this read mark is + propagated up to state ``A``. + +* When chain of states ``C -> D`` is verified the state ``D`` turns out to be + equivalent to state ``B``. + +* The read mark for ``r1`` has to be propagated to state ``C``, otherwise state + ``C`` might get mistakenly marked as equivalent to state ``E`` even though + values for register ``r1`` differ between ``C`` and ``E``. + Understanding eBPF verifier messages ==================================== diff --git a/Documentation/conf.py b/Documentation/conf.py index d927737e3c10..8b4e5451a02d 100644 --- a/Documentation/conf.py +++ b/Documentation/conf.py @@ -116,6 +116,9 @@ if major >= 3: # include/linux/linkage.h: "asmlinkage", + + # include/linux/btf.h + "__bpf_kfunc", ] else: diff --git a/Documentation/devicetree/bindings/.gitignore b/Documentation/devicetree/bindings/.gitignore index a77719968a7e..51ddb26d93f0 100644 --- a/Documentation/devicetree/bindings/.gitignore +++ b/Documentation/devicetree/bindings/.gitignore @@ -2,3 +2,8 @@ *.example.dts /processed-schema*.yaml /processed-schema*.json + +# +# We don't want to ignore the following even if they are dot-files +# +!.yamllint diff --git a/Documentation/devicetree/bindings/interrupt-controller/arm,gic-v3.yaml b/Documentation/devicetree/bindings/interrupt-controller/arm,gic-v3.yaml index 9f7d3e11aacb..8449e14af9f3 100644 --- a/Documentation/devicetree/bindings/interrupt-controller/arm,gic-v3.yaml +++ b/Documentation/devicetree/bindings/interrupt-controller/arm,gic-v3.yaml @@ -108,7 +108,7 @@ properties: msi-controller: description: - Only present if the Message Based Interrupt functionnality is + Only present if the Message Based Interrupt functionality is being exposed by the HW, and the mbi-ranges property present. mbi-ranges: diff --git a/Documentation/devicetree/bindings/rtc/qcom-pm8xxx-rtc.yaml b/Documentation/devicetree/bindings/rtc/qcom-pm8xxx-rtc.yaml index 0a7aa29563c1..21c8ea08ff0a 100644 --- a/Documentation/devicetree/bindings/rtc/qcom-pm8xxx-rtc.yaml +++ b/Documentation/devicetree/bindings/rtc/qcom-pm8xxx-rtc.yaml @@ -40,6 +40,8 @@ properties: description: Indicates that the setting of RTC time is allowed by the host CPU. 
+ wakeup-source: true + required: - compatible - reg diff --git a/Documentation/isdn/interface_capi.rst b/Documentation/isdn/interface_capi.rst index fe2421444b76..4d63b34b35cf 100644 --- a/Documentation/isdn/interface_capi.rst +++ b/Documentation/isdn/interface_capi.rst @@ -323,7 +323,7 @@ If the lowest bit of showcapimsgs is set, kernelcapi logs controller and application up and down events. In addition, every registered CAPI controller has an associated traceflag -parameter controlling how CAPI messages sent from and to tha controller are +parameter controlling how CAPI messages sent from and to the controller are logged. The traceflag parameter is initialized with the value of the showcapimsgs parameter when the controller is registered, but can later be changed via the MANUFACTURER_REQ command KCAPI_CMD_TRACE. diff --git a/Documentation/isdn/m_isdn.rst b/Documentation/isdn/m_isdn.rst index 9957de349e69..5847a164287e 100644 --- a/Documentation/isdn/m_isdn.rst +++ b/Documentation/isdn/m_isdn.rst @@ -3,7 +3,7 @@ mISDN Driver ============ mISDN is a new modular ISDN driver, in the long term it should replace -the old I4L driver architecture for passiv ISDN cards. +the old I4L driver architecture for passive ISDN cards. It was designed to allow a broad range of applications and interfaces but only have the basic function in kernel, the interface to the user space is based on sockets with a own address family AF_ISDN. diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml new file mode 100644 index 000000000000..b4dcdae54ffd --- /dev/null +++ b/Documentation/netlink/specs/netdev.yaml @@ -0,0 +1,100 @@ +name: netdev + +doc: + netdev configuration over generic netlink. + +definitions: + - + type: flags + name: xdp-act + entries: + - + name: basic + doc: + XDP feautues set supported by all drivers + (XDP_ABORTED, XDP_DROP, XDP_PASS, XDP_TX) + - + name: redirect + doc: + The netdev supports XDP_REDIRECT + - + name: ndo-xmit + doc: + This feature informs if netdev implements ndo_xdp_xmit callback. + - + name: xsk-zerocopy + doc: + This feature informs if netdev supports AF_XDP in zero copy mode. + - + name: hw-offload + doc: + This feature informs if netdev supports XDP hw oflloading. + - + name: rx-sg + doc: + This feature informs if netdev implements non-linear XDP buffer + support in the driver napi callback. + - + name: ndo-xmit-sg + doc: + This feature informs if netdev implements non-linear XDP buffer + support in ndo_xdp_xmit callback. + +attribute-sets: + - + name: dev + attributes: + - + name: ifindex + doc: netdev ifindex + type: u32 + value: 1 + checks: + min: 1 + - + name: pad + type: pad + - + name: xdp-features + doc: Bitmask of enabled xdp-features. + type: u64 + enum: xdp-act + enum-as-flags: true + +operations: + list: + - + name: dev-get + doc: Get / dump information about a netdev. + value: 1 + attribute-set: dev + do: + request: + attributes: + - ifindex + reply: &dev-all + attributes: + - ifindex + - xdp-features + dump: + reply: *dev-all + - + name: dev-add-ntf + doc: Notification about device appearing. + notify: dev-get + mcgrp: mgmt + - + name: dev-del-ntf + doc: Notification about device disappearing. + notify: dev-get + mcgrp: mgmt + - + name: dev-change-ntf + doc: Notification about device configuration being changed. 
+ notify: dev-get + mcgrp: mgmt + +mcast-groups: + list: + - + name: mgmt diff --git a/Documentation/networking/device_drivers/ethernet/wangxun/txgbe.rst b/Documentation/networking/device_drivers/ethernet/wangxun/txgbe.rst index eaa87dbe8848..d052ef40fe36 100644 --- a/Documentation/networking/device_drivers/ethernet/wangxun/txgbe.rst +++ b/Documentation/networking/device_drivers/ethernet/wangxun/txgbe.rst @@ -16,5 +16,5 @@ Contents Support ======= -If you got any problem, contact Wangxun support team via [email protected] +If you got any problem, contact Wangxun support team via [email protected] and Cc: netdev. diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index 9807b05a1b57..0a67cb738013 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -8070,9 +8070,13 @@ considering the state as complete. VMM needs to ensure that the dirty state is final and avoid missing dirty pages from another ioctl ordered after the bitmap collection. -NOTE: One example of using the backup bitmap is saving arm64 vgic/its -tables through KVM_DEV_ARM_{VGIC_GRP_CTRL, ITS_SAVE_TABLES} command on -KVM device "kvm-arm-vgic-its" when dirty ring is enabled. +NOTE: Multiple examples of using the backup bitmap: (1) save vgic/its +tables through command KVM_DEV_ARM_{VGIC_GRP_CTRL, ITS_SAVE_TABLES} on +KVM device "kvm-arm-vgic-its". (2) restore vgic/its tables through +command KVM_DEV_ARM_{VGIC_GRP_CTRL, ITS_RESTORE_TABLES} on KVM device +"kvm-arm-vgic-its". VGICv3 LPI pending status is restored. (3) save +vgic3 pending table through KVM_DEV_ARM_VGIC_{GRP_CTRL, SAVE_PENDING_TABLES} +command on KVM device "kvm-arm-vgic-v3". 8.30 KVM_CAP_XEN_HVM -------------------- diff --git a/MAINTAINERS b/MAINTAINERS index 2cf9eb43ed8f..f2bd469ffae5 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -15679,7 +15679,7 @@ OPENRISC ARCHITECTURE M: Jonas Bonn <[email protected]> M: Stefan Kristiansson <[email protected]> M: Stafford Horne <[email protected]> S: Maintained W: http://openrisc.io T: git https://github.com/openrisc/linux.git @@ -21752,6 +21752,7 @@ F: include/uapi/linux/uvcvideo.h USB WEBCAM GADGET M: Laurent Pinchart <[email protected]> +M: Daniel Scally <[email protected]> S: Maintained F: drivers/usb/gadget/function/*uvc* @@ -2,7 +2,7 @@ VERSION = 6 PATCHLEVEL = 2 SUBLEVEL = 0 -EXTRAVERSION = -rc6 +EXTRAVERSION = -rc7 NAME = Hurr durr I'ma ninja sloth # *DOCUMENTATION* diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c index 94a666dd1443..2642e9ce2819 100644 --- a/arch/arm64/kvm/vgic/vgic-its.c +++ b/arch/arm64/kvm/vgic/vgic-its.c @@ -2187,7 +2187,7 @@ static int vgic_its_save_ite(struct vgic_its *its, struct its_device *dev, ((u64)ite->irq->intid << KVM_ITS_ITE_PINTID_SHIFT) | ite->collection->collection_id; val = cpu_to_le64(val); - return kvm_write_guest_lock(kvm, gpa, &val, ite_esz); + return vgic_write_guest_lock(kvm, gpa, &val, ite_esz); } /** @@ -2339,7 +2339,7 @@ static int vgic_its_save_dte(struct vgic_its *its, struct its_device *dev, (itt_addr_field << KVM_ITS_DTE_ITTADDR_SHIFT) | (dev->num_eventid_bits - 1)); val = cpu_to_le64(val); - return kvm_write_guest_lock(kvm, ptr, &val, dte_esz); + return vgic_write_guest_lock(kvm, ptr, &val, dte_esz); } /** @@ -2526,7 +2526,7 @@ static int vgic_its_save_cte(struct vgic_its *its, ((u64)collection->target_addr << KVM_ITS_CTE_RDBASE_SHIFT) | collection->collection_id); val = cpu_to_le64(val); - return kvm_write_guest_lock(its->dev->kvm, gpa, &val, esz); + return 
vgic_write_guest_lock(its->dev->kvm, gpa, &val, esz); } /* @@ -2607,7 +2607,7 @@ static int vgic_its_save_collection_table(struct vgic_its *its) */ val = 0; BUG_ON(cte_esz > sizeof(val)); - ret = kvm_write_guest_lock(its->dev->kvm, gpa, &val, cte_esz); + ret = vgic_write_guest_lock(its->dev->kvm, gpa, &val, cte_esz); return ret; } @@ -2743,7 +2743,6 @@ static int vgic_its_has_attr(struct kvm_device *dev, static int vgic_its_ctrl(struct kvm *kvm, struct vgic_its *its, u64 attr) { const struct vgic_its_abi *abi = vgic_its_get_abi(its); - struct vgic_dist *dist = &kvm->arch.vgic; int ret = 0; if (attr == KVM_DEV_ARM_VGIC_CTRL_INIT) /* Nothing to do */ @@ -2763,9 +2762,7 @@ static int vgic_its_ctrl(struct kvm *kvm, struct vgic_its *its, u64 attr) vgic_its_reset(kvm, its); break; case KVM_DEV_ARM_ITS_SAVE_TABLES: - dist->save_its_tables_in_progress = true; ret = abi->save_tables(its); - dist->save_its_tables_in_progress = false; break; case KVM_DEV_ARM_ITS_RESTORE_TABLES: ret = abi->restore_tables(its); @@ -2792,7 +2789,7 @@ bool kvm_arch_allow_write_without_running_vcpu(struct kvm *kvm) { struct vgic_dist *dist = &kvm->arch.vgic; - return dist->save_its_tables_in_progress; + return dist->table_write_in_progress; } static int vgic_its_set_attr(struct kvm_device *dev, diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c index 2624963cb95b..684bdfaad4a9 100644 --- a/arch/arm64/kvm/vgic/vgic-v3.c +++ b/arch/arm64/kvm/vgic/vgic-v3.c @@ -339,7 +339,7 @@ retry: if (status) { /* clear consumed data */ val &= ~(1 << bit_nr); - ret = kvm_write_guest_lock(kvm, ptr, &val, 1); + ret = vgic_write_guest_lock(kvm, ptr, &val, 1); if (ret) return ret; } @@ -434,7 +434,7 @@ int vgic_v3_save_pending_tables(struct kvm *kvm) else val &= ~(1 << bit_nr); - ret = kvm_write_guest_lock(kvm, ptr, &val, 1); + ret = vgic_write_guest_lock(kvm, ptr, &val, 1); if (ret) goto out; } diff --git a/arch/arm64/kvm/vgic/vgic.h b/arch/arm64/kvm/vgic/vgic.h index 23e280fa0a16..7f7f3c5ed85a 100644 --- a/arch/arm64/kvm/vgic/vgic.h +++ b/arch/arm64/kvm/vgic/vgic.h @@ -6,6 +6,7 @@ #define __KVM_ARM_VGIC_NEW_H__ #include <linux/irqchip/arm-gic-common.h> +#include <asm/kvm_mmu.h> #define PRODUCT_ID_KVM 0x4b /* ASCII code K */ #define IMPLEMENTER_ARM 0x43b @@ -131,6 +132,19 @@ static inline bool vgic_irq_is_multi_sgi(struct vgic_irq *irq) return vgic_irq_get_lr_count(irq) > 1; } +static inline int vgic_write_guest_lock(struct kvm *kvm, gpa_t gpa, + const void *data, unsigned long len) +{ + struct vgic_dist *dist = &kvm->arch.vgic; + int ret; + + dist->table_write_in_progress = true; + ret = kvm_write_guest_lock(kvm, gpa, data, len); + dist->table_write_in_progress = false; + + return ret; +} + /* * This struct provides an intermediate representation of the fields contained * in the GICH_VMCR and ICH_VMCR registers, such that code exporting the GIC diff --git a/arch/ia64/kernel/sys_ia64.c b/arch/ia64/kernel/sys_ia64.c index f6a502e8f02c..6e948d015332 100644 --- a/arch/ia64/kernel/sys_ia64.c +++ b/arch/ia64/kernel/sys_ia64.c @@ -170,6 +170,9 @@ ia64_mremap (unsigned long addr, unsigned long old_len, unsigned long new_len, u asmlinkage long ia64_clock_getres(const clockid_t which_clock, struct __kernel_timespec __user *tp) { + struct timespec64 rtn_tp; + s64 tick_ns; + /* * ia64's clock_gettime() syscall is implemented as a vdso call * fsys_clock_gettime(). 
Currently it handles only @@ -185,8 +188,8 @@ ia64_clock_getres(const clockid_t which_clock, struct __kernel_timespec __user * switch (which_clock) { case CLOCK_REALTIME: case CLOCK_MONOTONIC: - s64 tick_ns = DIV_ROUND_UP(NSEC_PER_SEC, local_cpu_data->itc_freq); - struct timespec64 rtn_tp = ns_to_timespec64(tick_ns); + tick_ns = DIV_ROUND_UP(NSEC_PER_SEC, local_cpu_data->itc_freq); + rtn_tp = ns_to_timespec64(tick_ns); return put_timespec64(&rtn_tp, tp); } diff --git a/arch/parisc/kernel/firmware.c b/arch/parisc/kernel/firmware.c index 4dfe1f49c5c8..6817892a2c58 100644 --- a/arch/parisc/kernel/firmware.c +++ b/arch/parisc/kernel/firmware.c @@ -1303,7 +1303,7 @@ static char iodc_dbuf[4096] __page_aligned_bss; */ int pdc_iodc_print(const unsigned char *str, unsigned count) { - unsigned int i; + unsigned int i, found = 0; unsigned long flags; count = min_t(unsigned int, count, sizeof(iodc_dbuf)); @@ -1315,6 +1315,7 @@ int pdc_iodc_print(const unsigned char *str, unsigned count) iodc_dbuf[i+0] = '\r'; iodc_dbuf[i+1] = '\n'; i += 2; + found = 1; goto print; default: iodc_dbuf[i] = str[i]; @@ -1330,7 +1331,7 @@ print: __pa(pdc_result), 0, __pa(iodc_dbuf), i, 0); spin_unlock_irqrestore(&pdc_lock, flags); - return i; + return i - found; } #if !defined(BOOTLOADER) diff --git a/arch/parisc/kernel/ptrace.c b/arch/parisc/kernel/ptrace.c index 69c62933e952..ceb45f51d52e 100644 --- a/arch/parisc/kernel/ptrace.c +++ b/arch/parisc/kernel/ptrace.c @@ -126,6 +126,12 @@ long arch_ptrace(struct task_struct *child, long request, unsigned long tmp; long ret = -EIO; + unsigned long user_regs_struct_size = sizeof(struct user_regs_struct); +#ifdef CONFIG_64BIT + if (is_compat_task()) + user_regs_struct_size /= 2; +#endif + switch (request) { /* Read the word at location addr in the USER area. For ptraced @@ -166,7 +172,7 @@ long arch_ptrace(struct task_struct *child, long request, addr >= sizeof(struct pt_regs)) break; if (addr == PT_IAOQ0 || addr == PT_IAOQ1) { - data |= 3; /* ensure userspace privilege */ + data |= PRIV_USER; /* ensure userspace privilege */ } if ((addr >= PT_GR1 && addr <= PT_GR31) || addr == PT_IAOQ0 || addr == PT_IAOQ1 || @@ -181,14 +187,14 @@ long arch_ptrace(struct task_struct *child, long request, return copy_regset_to_user(child, task_user_regset_view(current), REGSET_GENERAL, - 0, sizeof(struct user_regs_struct), + 0, user_regs_struct_size, datap); case PTRACE_SETREGS: /* Set all gp regs in the child. */ return copy_regset_from_user(child, task_user_regset_view(current), REGSET_GENERAL, - 0, sizeof(struct user_regs_struct), + 0, user_regs_struct_size, datap); case PTRACE_GETFPREGS: /* Get the child FPU state. 
*/ @@ -285,7 +291,7 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request, if (addr >= sizeof(struct pt_regs)) break; if (addr == PT_IAOQ0+4 || addr == PT_IAOQ1+4) { - data |= 3; /* ensure userspace privilege */ + data |= PRIV_USER; /* ensure userspace privilege */ } if (addr >= PT_FR0 && addr <= PT_FR31 + 4) { /* Special case, fp regs are 64 bits anyway */ @@ -302,6 +308,11 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request, } } break; + case PTRACE_GETREGS: + case PTRACE_SETREGS: + case PTRACE_GETFPREGS: + case PTRACE_SETFPREGS: + return arch_ptrace(child, request, addr, data); default: ret = compat_ptrace_request(child, request, addr, data); @@ -484,7 +495,7 @@ static void set_reg(struct pt_regs *regs, int num, unsigned long val) case RI(iaoq[0]): case RI(iaoq[1]): /* set 2 lowest bits to ensure userspace privilege: */ - regs->iaoq[num - RI(iaoq[0])] = val | 3; + regs->iaoq[num - RI(iaoq[0])] = val | PRIV_USER; return; case RI(sar): regs->sar = val; return; diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush.h b/arch/powerpc/include/asm/book3s/64/tlbflush.h index dd39313242b4..d5cd16270c5d 100644 --- a/arch/powerpc/include/asm/book3s/64/tlbflush.h +++ b/arch/powerpc/include/asm/book3s/64/tlbflush.h @@ -97,6 +97,8 @@ static inline void tlb_flush(struct mmu_gather *tlb) { if (radix_enabled()) radix__tlb_flush(tlb); + + return hash__tlb_flush(tlb); } #ifdef CONFIG_SMP diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h index 77fa88c2aed0..eb6d094083fd 100644 --- a/arch/powerpc/include/asm/hw_irq.h +++ b/arch/powerpc/include/asm/hw_irq.h @@ -173,6 +173,15 @@ static inline notrace unsigned long irq_soft_mask_or_return(unsigned long mask) return flags; } +static inline notrace unsigned long irq_soft_mask_andc_return(unsigned long mask) +{ + unsigned long flags = irq_soft_mask_return(); + + irq_soft_mask_set(flags & ~mask); + + return flags; +} + static inline unsigned long arch_local_save_flags(void) { return irq_soft_mask_return(); @@ -192,7 +201,7 @@ static inline void arch_local_irq_enable(void) static inline unsigned long arch_local_irq_save(void) { - return irq_soft_mask_set_return(IRQS_DISABLED); + return irq_soft_mask_or_return(IRQS_DISABLED); } static inline bool arch_irqs_disabled_flags(unsigned long flags) @@ -331,10 +340,11 @@ bool power_pmu_wants_prompt_pmi(void); * is a different soft-masked interrupt pending that requires hard * masking. */ -static inline bool should_hard_irq_enable(void) +static inline bool should_hard_irq_enable(struct pt_regs *regs) { if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG)) { - WARN_ON(irq_soft_mask_return() == IRQS_ENABLED); + WARN_ON(irq_soft_mask_return() != IRQS_ALL_DISABLED); + WARN_ON(!(get_paca()->irq_happened & PACA_IRQ_HARD_DIS)); WARN_ON(mfmsr() & MSR_EE); } @@ -347,8 +357,17 @@ static inline bool should_hard_irq_enable(void) * * TODO: Add test for 64e */ - if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && !power_pmu_wants_prompt_pmi()) - return false; + if (IS_ENABLED(CONFIG_PPC_BOOK3S_64)) { + if (!power_pmu_wants_prompt_pmi()) + return false; + /* + * If PMIs are disabled then IRQs should be disabled as well, + * so we shouldn't see this condition, check for it just in + * case because we are about to enable PMIs. 
+ */ + if (WARN_ON_ONCE(regs->softe & IRQS_PMI_DISABLED)) + return false; + } if (get_paca()->irq_happened & PACA_IRQ_MUST_HARD_MASK) return false; @@ -358,18 +377,16 @@ static inline bool should_hard_irq_enable(void) /* * Do the hard enabling, only call this if should_hard_irq_enable is true. + * This allows PMI interrupts to profile irq handlers. */ static inline void do_hard_irq_enable(void) { - if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG)) { - WARN_ON(irq_soft_mask_return() == IRQS_ENABLED); - WARN_ON(get_paca()->irq_happened & PACA_IRQ_MUST_HARD_MASK); - WARN_ON(mfmsr() & MSR_EE); - } /* - * This allows PMI interrupts (and watchdog soft-NMIs) through. - * There is no other reason to enable this way. + * Asynch interrupts come in with IRQS_ALL_DISABLED, + * PACA_IRQ_HARD_DIS, and MSR[EE]=0. */ + if (IS_ENABLED(CONFIG_PPC_BOOK3S_64)) + irq_soft_mask_andc_return(IRQS_PMI_DISABLED); get_paca()->irq_happened &= ~PACA_IRQ_HARD_DIS; __hard_irq_enable(); } @@ -452,7 +469,7 @@ static inline bool arch_irq_disabled_regs(struct pt_regs *regs) return !(regs->msr & MSR_EE); } -static __always_inline bool should_hard_irq_enable(void) +static __always_inline bool should_hard_irq_enable(struct pt_regs *regs) { return false; } diff --git a/arch/powerpc/kernel/dbell.c b/arch/powerpc/kernel/dbell.c index f55c6fb34a3a..5712dd846263 100644 --- a/arch/powerpc/kernel/dbell.c +++ b/arch/powerpc/kernel/dbell.c @@ -27,7 +27,7 @@ DEFINE_INTERRUPT_HANDLER_ASYNC(doorbell_exception) ppc_msgsync(); - if (should_hard_irq_enable()) + if (should_hard_irq_enable(regs)) do_hard_irq_enable(); kvmppc_clear_host_ipi(smp_processor_id()); diff --git a/arch/powerpc/kernel/head_85xx.S b/arch/powerpc/kernel/head_85xx.S index d438ca74e96c..fdbee1093e2b 100644 --- a/arch/powerpc/kernel/head_85xx.S +++ b/arch/powerpc/kernel/head_85xx.S @@ -864,7 +864,7 @@ _GLOBAL(load_up_spe) * SPE unavailable trap from kernel - print a message, but let * the task use SPE in the kernel until it returns to user mode. */ -KernelSPE: +SYM_FUNC_START_LOCAL(KernelSPE) lwz r3,_MSR(r1) oris r3,r3,MSR_SPE@h stw r3,_MSR(r1) /* enable use of SPE after return */ @@ -881,6 +881,7 @@ KernelSPE: #endif .align 4,0 +SYM_FUNC_END(KernelSPE) #endif /* CONFIG_SPE */ /* diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c index c5b9ce887483..c9535f2760b5 100644 --- a/arch/powerpc/kernel/irq.c +++ b/arch/powerpc/kernel/irq.c @@ -238,7 +238,7 @@ static void __do_irq(struct pt_regs *regs, unsigned long oldsp) irq = static_call(ppc_get_irq)(); /* We can hard enable interrupts now to allow perf interrupts */ - if (should_hard_irq_enable()) + if (should_hard_irq_enable(regs)) do_hard_irq_enable(); /* And finally process it */ diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c index d68de3618741..e26eb6618ae5 100644 --- a/arch/powerpc/kernel/time.c +++ b/arch/powerpc/kernel/time.c @@ -515,7 +515,7 @@ DEFINE_INTERRUPT_HANDLER_ASYNC(timer_interrupt) } /* Conditionally hard-enable interrupts. */ - if (should_hard_irq_enable()) { + if (should_hard_irq_enable(regs)) { /* * Ensure a positive value is written to the decrementer, or * else some CPUs will continue to take decrementer exceptions. diff --git a/arch/powerpc/kexec/file_load_64.c b/arch/powerpc/kexec/file_load_64.c index af8854f9eae3..52085751f5f4 100644 --- a/arch/powerpc/kexec/file_load_64.c +++ b/arch/powerpc/kexec/file_load_64.c @@ -989,10 +989,13 @@ unsigned int kexec_extra_fdt_size_ppc64(struct kimage *image) * linux,drconf-usable-memory properties. 
Get an approximate on the * number of usable memory entries and use for FDT size estimation. */ - usm_entries = ((memblock_end_of_DRAM() / drmem_lmb_size()) + - (2 * (resource_size(&crashk_res) / drmem_lmb_size()))); - - extra_size = (unsigned int)(usm_entries * sizeof(u64)); + if (drmem_lmb_size()) { + usm_entries = ((memory_hotplug_max() / drmem_lmb_size()) + + (2 * (resource_size(&crashk_res) / drmem_lmb_size()))); + extra_size = (unsigned int)(usm_entries * sizeof(u64)); + } else { + extra_size = 0; + } /* * Get the number of CPU nodes in the current DT. This allows to diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c index 0dce93ccaadf..e89281d3ba28 100644 --- a/arch/powerpc/kvm/booke.c +++ b/arch/powerpc/kvm/booke.c @@ -912,16 +912,15 @@ static int kvmppc_handle_debug(struct kvm_vcpu *vcpu) static void kvmppc_fill_pt_regs(struct pt_regs *regs) { - ulong r1, ip, msr, lr; + ulong r1, msr, lr; asm("mr %0, 1" : "=r"(r1)); asm("mflr %0" : "=r"(lr)); asm("mfmsr %0" : "=r"(msr)); - asm("bl 1f; 1: mflr %0" : "=r"(ip)); memset(regs, 0, sizeof(*regs)); regs->gpr[1] = r1; - regs->nip = ip; + regs->nip = _THIS_IP_; regs->msr = msr; regs->link = lr; } diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c index cac727b01799..26245aaf12b8 100644 --- a/arch/powerpc/mm/book3s64/radix_pgtable.c +++ b/arch/powerpc/mm/book3s64/radix_pgtable.c @@ -234,6 +234,14 @@ void radix__mark_rodata_ro(void) end = (unsigned long)__end_rodata; radix__change_memory_range(start, end, _PAGE_WRITE); + + for (start = PAGE_OFFSET; start < (unsigned long)_stext; start += PAGE_SIZE) { + end = start + PAGE_SIZE; + if (overlaps_interrupt_vector_text(start, end)) + radix__change_memory_range(start, end, _PAGE_WRITE); + else + break; + } } void radix__mark_initmem_nx(void) @@ -262,6 +270,22 @@ print_mapping(unsigned long start, unsigned long end, unsigned long size, bool e static unsigned long next_boundary(unsigned long addr, unsigned long end) { #ifdef CONFIG_STRICT_KERNEL_RWX + unsigned long stext_phys; + + stext_phys = __pa_symbol(_stext); + + // Relocatable kernel running at non-zero real address + if (stext_phys != 0) { + // The end of interrupts code at zero is a rodata boundary + unsigned long end_intr = __pa_symbol(__end_interrupts) - stext_phys; + if (addr < end_intr) + return end_intr; + + // Start of relocated kernel text is a rodata boundary + if (addr < stext_phys) + return stext_phys; + } + if (addr < __pa_symbol(__srwx_boundary)) return __pa_symbol(__srwx_boundary); #endif diff --git a/arch/powerpc/perf/imc-pmu.c b/arch/powerpc/perf/imc-pmu.c index 100e97daf76b..9d229ef7f86e 100644 --- a/arch/powerpc/perf/imc-pmu.c +++ b/arch/powerpc/perf/imc-pmu.c @@ -22,7 +22,7 @@ * Used to avoid races in counting the nest-pmu units during hotplug * register and unregister */ -static DEFINE_SPINLOCK(nest_init_lock); +static DEFINE_MUTEX(nest_init_lock); static DEFINE_PER_CPU(struct imc_pmu_ref *, local_nest_imc_refc); static struct imc_pmu **per_nest_pmu_arr; static cpumask_t nest_imc_cpumask; @@ -1629,7 +1629,7 @@ static void imc_common_mem_free(struct imc_pmu *pmu_ptr) static void imc_common_cpuhp_mem_free(struct imc_pmu *pmu_ptr) { if (pmu_ptr->domain == IMC_DOMAIN_NEST) { - spin_lock(&nest_init_lock); + mutex_lock(&nest_init_lock); if (nest_pmus == 1) { cpuhp_remove_state(CPUHP_AP_PERF_POWERPC_NEST_IMC_ONLINE); kfree(nest_imc_refc); @@ -1639,7 +1639,7 @@ static void imc_common_cpuhp_mem_free(struct imc_pmu *pmu_ptr) if (nest_pmus > 0) nest_pmus--; - 
spin_unlock(&nest_init_lock); + mutex_unlock(&nest_init_lock); } /* Free core_imc memory */ @@ -1796,11 +1796,11 @@ int init_imc_pmu(struct device_node *parent, struct imc_pmu *pmu_ptr, int pmu_id * rest. To handle the cpuhotplug callback unregister, we track * the number of nest pmus in "nest_pmus". */ - spin_lock(&nest_init_lock); + mutex_lock(&nest_init_lock); if (nest_pmus == 0) { ret = init_nest_pmu_ref(); if (ret) { - spin_unlock(&nest_init_lock); + mutex_unlock(&nest_init_lock); kfree(per_nest_pmu_arr); per_nest_pmu_arr = NULL; goto err_free_mem; @@ -1808,7 +1808,7 @@ int init_imc_pmu(struct device_node *parent, struct imc_pmu *pmu_ptr, int pmu_id /* Register for cpu hotplug notification. */ ret = nest_pmu_cpumask_init(); if (ret) { - spin_unlock(&nest_init_lock); + mutex_unlock(&nest_init_lock); kfree(nest_imc_refc); kfree(per_nest_pmu_arr); per_nest_pmu_arr = NULL; @@ -1816,7 +1816,7 @@ int init_imc_pmu(struct device_node *parent, struct imc_pmu *pmu_ptr, int pmu_id } } nest_pmus++; - spin_unlock(&nest_init_lock); + mutex_unlock(&nest_init_lock); break; case IMC_DOMAIN_CORE: ret = core_imc_pmu_cpumask_init(); diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile index faf2c2177094..82153960ac00 100644 --- a/arch/riscv/Makefile +++ b/arch/riscv/Makefile @@ -80,6 +80,9 @@ ifeq ($(CONFIG_PERF_EVENTS),y) KBUILD_CFLAGS += -fno-omit-frame-pointer endif +# Avoid generating .eh_frame sections. +KBUILD_CFLAGS += -fno-asynchronous-unwind-tables -fno-unwind-tables + KBUILD_CFLAGS_MODULE += $(call cc-option,-mno-relax) KBUILD_AFLAGS_MODULE += $(call as-option,-Wa$(comma)-mno-relax) diff --git a/arch/riscv/include/asm/hwcap.h b/arch/riscv/include/asm/hwcap.h index 86328e3acb02..64ad1937e714 100644 --- a/arch/riscv/include/asm/hwcap.h +++ b/arch/riscv/include/asm/hwcap.h @@ -70,7 +70,6 @@ static_assert(RISCV_ISA_EXT_ID_MAX <= RISCV_ISA_EXT_MAX); */ enum riscv_isa_ext_key { RISCV_ISA_EXT_KEY_FPU, /* For 'F' and 'D' */ - RISCV_ISA_EXT_KEY_ZIHINTPAUSE, RISCV_ISA_EXT_KEY_SVINVAL, RISCV_ISA_EXT_KEY_MAX, }; @@ -91,8 +90,6 @@ static __always_inline int riscv_isa_ext2key(int num) return RISCV_ISA_EXT_KEY_FPU; case RISCV_ISA_EXT_d: return RISCV_ISA_EXT_KEY_FPU; - case RISCV_ISA_EXT_ZIHINTPAUSE: - return RISCV_ISA_EXT_KEY_ZIHINTPAUSE; case RISCV_ISA_EXT_SVINVAL: return RISCV_ISA_EXT_KEY_SVINVAL; default: diff --git a/arch/riscv/include/asm/vdso/processor.h b/arch/riscv/include/asm/vdso/processor.h index fa70cfe507aa..14f5d27783b8 100644 --- a/arch/riscv/include/asm/vdso/processor.h +++ b/arch/riscv/include/asm/vdso/processor.h @@ -4,30 +4,26 @@ #ifndef __ASSEMBLY__ -#include <linux/jump_label.h> #include <asm/barrier.h> -#include <asm/hwcap.h> static inline void cpu_relax(void) { - if (!static_branch_likely(&riscv_isa_ext_keys[RISCV_ISA_EXT_KEY_ZIHINTPAUSE])) { #ifdef __riscv_muldiv - int dummy; - /* In lieu of a halt instruction, induce a long-latency stall. */ - __asm__ __volatile__ ("div %0, %0, zero" : "=r" (dummy)); + int dummy; + /* In lieu of a halt instruction, induce a long-latency stall. */ + __asm__ __volatile__ ("div %0, %0, zero" : "=r" (dummy)); #endif - } else { - /* - * Reduce instruction retirement. - * This assumes the PC changes. - */ -#ifdef CONFIG_TOOLCHAIN_HAS_ZIHINTPAUSE - __asm__ __volatile__ ("pause"); + +#ifdef __riscv_zihintpause + /* + * Reduce instruction retirement. + * This assumes the PC changes. 
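
A note on the riscv cpu_relax() rework in this hunk: the runtime static key is dropped in favour of the compile-time __riscv_zihintpause test, so the hint is selected when the kernel is built. A hedged user-space sketch of the spin-wait pattern cpu_relax() serves; the 0x0100000F "pause" encoding is from the hunk, the portable fallback and names are stand-ins:

    #include <stdatomic.h>

    static inline void cpu_relax_sketch(void)
    {
    #ifdef __riscv_zihintpause
        __asm__ __volatile__ ("pause");
    #elif defined(__riscv)
        /* Raw encoding, for toolchains without Zihintpause support. */
        __asm__ __volatile__ (".4byte 0x100000F");
    #else
        /* Portable fallback: just a compiler barrier. */
        __asm__ __volatile__ ("" ::: "memory");
    #endif
    }

    void spin_until_set(atomic_int *flag)
    {
        while (!atomic_load_explicit(flag, memory_order_acquire))
            cpu_relax_sketch();  /* reduce retirement while polling */
    }
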
+ */ + __asm__ __volatile__ ("pause"); #else - /* Encoding of the pause instruction */ - __asm__ __volatile__ (".4byte 0x100000F"); + /* Encoding of the pause instruction */ + __asm__ __volatile__ (".4byte 0x100000F"); #endif - } barrier(); } diff --git a/arch/riscv/kernel/probes/kprobes.c b/arch/riscv/kernel/probes/kprobes.c index f21592d20306..41c7481afde3 100644 --- a/arch/riscv/kernel/probes/kprobes.c +++ b/arch/riscv/kernel/probes/kprobes.c @@ -48,6 +48,21 @@ static void __kprobes arch_simulate_insn(struct kprobe *p, struct pt_regs *regs) post_kprobe_handler(p, kcb, regs); } +static bool __kprobes arch_check_kprobe(struct kprobe *p) +{ + unsigned long tmp = (unsigned long)p->addr - p->offset; + unsigned long addr = (unsigned long)p->addr; + + while (tmp <= addr) { + if (tmp == addr) + return true; + + tmp += GET_INSN_LENGTH(*(u16 *)tmp); + } + + return false; +} + int __kprobes arch_prepare_kprobe(struct kprobe *p) { unsigned long probe_addr = (unsigned long)p->addr; @@ -55,6 +70,9 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p) if (probe_addr & 0x1) return -EILSEQ; + if (!arch_check_kprobe(p)) + return -EILSEQ; + /* copy instruction */ p->opcode = *p->addr; diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c index af35052d06ed..d0846ba818ee 100644 --- a/arch/s390/net/bpf_jit_comp.c +++ b/arch/s390/net/bpf_jit_comp.c @@ -30,6 +30,7 @@ #include <asm/facility.h> #include <asm/nospec-branch.h> #include <asm/set_memory.h> +#include <asm/text-patching.h> #include "bpf_jit.h" struct bpf_jit { @@ -50,12 +51,13 @@ struct bpf_jit { int r14_thunk_ip; /* Address of expoline thunk for 'br %r14' */ int tail_call_start; /* Tail call start offset */ int excnt; /* Number of exception table entries */ + int prologue_plt_ret; /* Return address for prologue hotpatch PLT */ + int prologue_plt; /* Start of prologue hotpatch PLT */ }; #define SEEN_MEM BIT(0) /* use mem[] for temporary storage */ #define SEEN_LITERAL BIT(1) /* code uses literals */ #define SEEN_FUNC BIT(2) /* calls C functions */ -#define SEEN_TAIL_CALL BIT(3) /* code uses tail calls */ #define SEEN_STACK (SEEN_FUNC | SEEN_MEM) /* @@ -68,6 +70,10 @@ struct bpf_jit { #define REG_0 REG_W0 /* Register 0 */ #define REG_1 REG_W1 /* Register 1 */ #define REG_2 BPF_REG_1 /* Register 2 */ +#define REG_3 BPF_REG_2 /* Register 3 */ +#define REG_4 BPF_REG_3 /* Register 4 */ +#define REG_7 BPF_REG_6 /* Register 7 */ +#define REG_8 BPF_REG_7 /* Register 8 */ #define REG_14 BPF_REG_0 /* Register 14 */ /* @@ -507,20 +513,58 @@ static void bpf_skip(struct bpf_jit *jit, int size) } /* + * PLT for hotpatchable calls. The calling convention is the same as for the + * ftrace hotpatch trampolines: %r0 is return address, %r1 is clobbered. + */ +extern const char bpf_plt[]; +extern const char bpf_plt_ret[]; +extern const char bpf_plt_target[]; +extern const char bpf_plt_end[]; +#define BPF_PLT_SIZE 32 +asm( + ".pushsection .rodata\n" + " .align 8\n" + "bpf_plt:\n" + " lgrl %r0,bpf_plt_ret\n" + " lgrl %r1,bpf_plt_target\n" + " br %r1\n" + " .align 8\n" + "bpf_plt_ret: .quad 0\n" + "bpf_plt_target: .quad 0\n" + "bpf_plt_end:\n" + " .popsection\n" +); + +static void bpf_jit_plt(void *plt, void *ret, void *target) +{ + memcpy(plt, bpf_plt, BPF_PLT_SIZE); + *(void **)((char *)plt + (bpf_plt_ret - bpf_plt)) = ret; + *(void **)((char *)plt + (bpf_plt_target - bpf_plt)) = target; +} + +/* * Emit function prologue * * Save registers and create stack frame if necessary. - * See stack frame layout desription in "bpf_jit.h"! 
+ * See stack frame layout description in "bpf_jit.h"! */ -static void bpf_jit_prologue(struct bpf_jit *jit, u32 stack_depth) +static void bpf_jit_prologue(struct bpf_jit *jit, struct bpf_prog *fp, + u32 stack_depth) { - if (jit->seen & SEEN_TAIL_CALL) { + /* No-op for hotpatching */ + /* brcl 0,prologue_plt */ + EMIT6_PCREL_RILC(0xc0040000, 0, jit->prologue_plt); + jit->prologue_plt_ret = jit->prg; + + if (fp->aux->func_idx == 0) { + /* Initialize the tail call counter in the main program. */ /* xc STK_OFF_TCCNT(4,%r15),STK_OFF_TCCNT(%r15) */ _EMIT6(0xd703f000 | STK_OFF_TCCNT, 0xf000 | STK_OFF_TCCNT); } else { /* - * There are no tail calls. Insert nops in order to have - * tail_call_start at a predictable offset. + * Skip the tail call counter initialization in subprograms. + * Insert nops in order to have tail_call_start at a + * predictable offset. */ bpf_skip(jit, 6); } @@ -558,6 +602,43 @@ static void bpf_jit_prologue(struct bpf_jit *jit, u32 stack_depth) } /* + * Emit an expoline for a jump that follows + */ +static void emit_expoline(struct bpf_jit *jit) +{ + /* exrl %r0,.+10 */ + EMIT6_PCREL_RIL(0xc6000000, jit->prg + 10); + /* j . */ + EMIT4_PCREL(0xa7f40000, 0); +} + +/* + * Emit __s390_indirect_jump_r1 thunk if necessary + */ +static void emit_r1_thunk(struct bpf_jit *jit) +{ + if (nospec_uses_trampoline()) { + jit->r1_thunk_ip = jit->prg; + emit_expoline(jit); + /* br %r1 */ + _EMIT2(0x07f1); + } +} + +/* + * Call r1 either directly or via __s390_indirect_jump_r1 thunk + */ +static void call_r1(struct bpf_jit *jit) +{ + if (nospec_uses_trampoline()) + /* brasl %r14,__s390_indirect_jump_r1 */ + EMIT6_PCREL_RILB(0xc0050000, REG_14, jit->r1_thunk_ip); + else + /* basr %r14,%r1 */ + EMIT2(0x0d00, REG_14, REG_1); +} + +/* * Function epilogue */ static void bpf_jit_epilogue(struct bpf_jit *jit, u32 stack_depth) @@ -570,25 +651,20 @@ static void bpf_jit_epilogue(struct bpf_jit *jit, u32 stack_depth) if (nospec_uses_trampoline()) { jit->r14_thunk_ip = jit->prg; /* Generate __s390_indirect_jump_r14 thunk */ - /* exrl %r0,.+10 */ - EMIT6_PCREL_RIL(0xc6000000, jit->prg + 10); - /* j . */ - EMIT4_PCREL(0xa7f40000, 0); + emit_expoline(jit); } /* br %r14 */ _EMIT2(0x07fe); - if ((nospec_uses_trampoline()) && - (is_first_pass(jit) || (jit->seen & SEEN_FUNC))) { - jit->r1_thunk_ip = jit->prg; - /* Generate __s390_indirect_jump_r1 thunk */ - /* exrl %r0,.+10 */ - EMIT6_PCREL_RIL(0xc6000000, jit->prg + 10); - /* j . 
*/ - EMIT4_PCREL(0xa7f40000, 0); - /* br %r1 */ - _EMIT2(0x07f1); - } + if (is_first_pass(jit) || (jit->seen & SEEN_FUNC)) + emit_r1_thunk(jit); + + jit->prg = ALIGN(jit->prg, 8); + jit->prologue_plt = jit->prg; + if (jit->prg_buf) + bpf_jit_plt(jit->prg_buf + jit->prg, + jit->prg_buf + jit->prologue_plt_ret, NULL); + jit->prg += BPF_PLT_SIZE; } static int get_probe_mem_regno(const u8 *insn) @@ -663,6 +739,34 @@ static int bpf_jit_probe_mem(struct bpf_jit *jit, struct bpf_prog *fp, } /* + * Sign-extend the register if necessary + */ +static int sign_extend(struct bpf_jit *jit, int r, u8 size, u8 flags) +{ + if (!(flags & BTF_FMODEL_SIGNED_ARG)) + return 0; + + switch (size) { + case 1: + /* lgbr %r,%r */ + EMIT4(0xb9060000, r, r); + return 0; + case 2: + /* lghr %r,%r */ + EMIT4(0xb9070000, r, r); + return 0; + case 4: + /* lgfr %r,%r */ + EMIT4(0xb9140000, r, r); + return 0; + case 8: + return 0; + default: + return -1; + } +} + +/* * Compile one eBPF instruction into s390x code * * NOTE: Use noinline because for gcov (-fprofile-arcs) gcc allocates a lot of @@ -1297,9 +1401,10 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, */ case BPF_JMP | BPF_CALL: { - u64 func; + const struct btf_func_model *m; bool func_addr_fixed; - int ret; + int j, ret; + u64 func; ret = bpf_jit_get_func_addr(fp, insn, extra_pass, &func, &func_addr_fixed); @@ -1308,15 +1413,38 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, REG_SET_SEEN(BPF_REG_5); jit->seen |= SEEN_FUNC; + /* + * Copy the tail call counter to where the callee expects it. + * + * Note 1: The callee can increment the tail call counter, but + * we do not load it back, since the x86 JIT does not do this + * either. + * + * Note 2: We assume that the verifier does not let us call the + * main program, which clears the tail call counter on entry. + */ + /* mvc STK_OFF_TCCNT(4,%r15),N(%r15) */ + _EMIT6(0xd203f000 | STK_OFF_TCCNT, + 0xf000 | (STK_OFF_TCCNT + STK_OFF + stack_depth)); + + /* Sign-extend the kfunc arguments. 
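
What the JIT's sign_extend() above computes, restated as portable C for illustration (the kernel emits lgbr/lghr/lgfr machine code rather than running this):

    #include <stdint.h>

    static int sign_extend_sketch(uint64_t *reg, uint8_t size)
    {
        switch (size) {
        case 1: *reg = (uint64_t)(int64_t)(int8_t)*reg;  return 0; /* lgbr */
        case 2: *reg = (uint64_t)(int64_t)(int16_t)*reg; return 0; /* lghr */
        case 4: *reg = (uint64_t)(int64_t)(int32_t)*reg; return 0; /* lgfr */
        case 8: return 0;   /* already full register width */
        default: return -1; /* unsupported argument size */
        }
    }

This is needed because kfuncs are ordinary C functions: the s390x ABI expects narrow signed arguments to arrive sign-extended to 64 bits, while BPF leaves the upper bits unspecified.
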
*/ + if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) { + m = bpf_jit_find_kfunc_model(fp, insn); + if (!m) + return -1; + + for (j = 0; j < m->nr_args; j++) { + if (sign_extend(jit, BPF_REG_1 + j, + m->arg_size[j], + m->arg_flags[j])) + return -1; + } + } + /* lgrl %w1,func */ EMIT6_PCREL_RILB(0xc4080000, REG_W1, _EMIT_CONST_U64(func)); - if (nospec_uses_trampoline()) { - /* brasl %r14,__s390_indirect_jump_r1 */ - EMIT6_PCREL_RILB(0xc0050000, REG_14, jit->r1_thunk_ip); - } else { - /* basr %r14,%w1 */ - EMIT2(0x0d00, REG_14, REG_W1); - } + /* %r1() */ + call_r1(jit); /* lgr %b0,%r2: load return value into %b0 */ EMIT4(0xb9040000, BPF_REG_0, REG_2); break; @@ -1329,10 +1457,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, * B1: pointer to ctx * B2: pointer to bpf_array * B3: index in bpf_array - */ - jit->seen |= SEEN_TAIL_CALL; - - /* + * * if (index >= array->map.max_entries) * goto out; */ @@ -1393,8 +1518,16 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, /* lg %r1,bpf_func(%r1) */ EMIT6_DISP_LH(0xe3000000, 0x0004, REG_1, REG_1, REG_0, offsetof(struct bpf_prog, bpf_func)); - /* bc 0xf,tail_call_start(%r1) */ - _EMIT4(0x47f01000 + jit->tail_call_start); + if (nospec_uses_trampoline()) { + jit->seen |= SEEN_FUNC; + /* aghi %r1,tail_call_start */ + EMIT4_IMM(0xa70b0000, REG_1, jit->tail_call_start); + /* brcl 0xf,__s390_indirect_jump_r1 */ + EMIT6_PCREL_RILC(0xc0040000, 0xf, jit->r1_thunk_ip); + } else { + /* bc 0xf,tail_call_start(%r1) */ + _EMIT4(0x47f01000 + jit->tail_call_start); + } /* out: */ if (jit->prg_buf) { *(u16 *)(jit->prg_buf + patch_1_clrj + 2) = @@ -1688,7 +1821,7 @@ static int bpf_jit_prog(struct bpf_jit *jit, struct bpf_prog *fp, jit->prg = 0; jit->excnt = 0; - bpf_jit_prologue(jit, stack_depth); + bpf_jit_prologue(jit, fp, stack_depth); if (bpf_set_addr(jit, 0) < 0) return -1; for (i = 0; i < fp->len; i += insn_count) { @@ -1768,6 +1901,9 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp) struct bpf_jit jit; int pass; + if (WARN_ON_ONCE(bpf_plt_end - bpf_plt != BPF_PLT_SIZE)) + return orig_fp; + if (!fp->jit_requested) return orig_fp; @@ -1859,3 +1995,508 @@ out: tmp : orig_fp); return fp; } + +bool bpf_jit_supports_kfunc_call(void) +{ + return true; +} + +int bpf_arch_text_poke(void *ip, enum bpf_text_poke_type t, + void *old_addr, void *new_addr) +{ + struct { + u16 opc; + s32 disp; + } __packed insn; + char expected_plt[BPF_PLT_SIZE]; + char current_plt[BPF_PLT_SIZE]; + char *plt; + int err; + + /* Verify the branch to be patched. */ + err = copy_from_kernel_nofault(&insn, ip, sizeof(insn)); + if (err < 0) + return err; + if (insn.opc != (0xc004 | (old_addr ? 0xf0 : 0))) + return -EINVAL; + + if (t == BPF_MOD_JUMP && + insn.disp == ((char *)new_addr - (char *)ip) >> 1) { + /* + * The branch already points to the destination, + * there is no PLT. + */ + } else { + /* Verify the PLT. */ + plt = (char *)ip + (insn.disp << 1); + err = copy_from_kernel_nofault(current_plt, plt, BPF_PLT_SIZE); + if (err < 0) + return err; + bpf_jit_plt(expected_plt, (char *)ip + 6, old_addr); + if (memcmp(current_plt, expected_plt, BPF_PLT_SIZE)) + return -EINVAL; + /* Adjust the call address. */ + s390_kernel_write(plt + (bpf_plt_target - bpf_plt), + &new_addr, sizeof(void *)); + } + + /* Adjust the mask of the branch. */ + insn.opc = 0xc004 | (new_addr ? 0xf0 : 0); + s390_kernel_write((char *)ip + 1, (char *)&insn.opc + 1, 1); + + /* Make the new code visible to the other CPUs. 
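
The shape of the 32-byte hotpatch PLT that bpf_arch_text_poke() above rewrites, modelled as a struct for illustration. The 16-byte code area holds lgrl %r0 (6 bytes), lgrl %r1 (6 bytes) and br %r1 (2 bytes), padded by the .align 8; the exact offsets here are assumptions, since the JIT derives them from the bpf_plt/bpf_plt_ret/bpf_plt_target labels instead:

    #include <string.h>

    struct bpf_plt_sketch {
        unsigned char insns[16]; /* lgrl %r0,ret; lgrl %r1,target; br %r1 */
        void *ret;               /* bpf_plt_ret: return address */
        void *target;            /* bpf_plt_target: call destination */
    };

    static void plt_set(struct bpf_plt_sketch *plt, const void *tmpl,
                        void *ret, void *target)
    {
        memcpy(plt, tmpl, sizeof(*plt)); /* copy the code template */
        plt->ret = ret;                  /* then fill in the data slots */
        plt->target = target;
    }

Patching only ever touches the two data slots and the branch mask, which is what keeps the operation safe to do while other CPUs may be executing the program.
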
*/ + text_poke_sync_lock(); + + return 0; +} + +struct bpf_tramp_jit { + struct bpf_jit common; + int orig_stack_args_off;/* Offset of arguments placed on stack by the + * func_addr's original caller + */ + int stack_size; /* Trampoline stack size */ + int stack_args_off; /* Offset of stack arguments for calling + * func_addr, has to be at the top + */ + int reg_args_off; /* Offset of register arguments for calling + * func_addr + */ + int ip_off; /* For bpf_get_func_ip(), has to be at + * (ctx - 16) + */ + int arg_cnt_off; /* For bpf_get_func_arg_cnt(), has to be at + * (ctx - 8) + */ + int bpf_args_off; /* Offset of BPF_PROG context, which consists + * of BPF arguments followed by return value + */ + int retval_off; /* Offset of return value (see above) */ + int r7_r8_off; /* Offset of saved %r7 and %r8, which are used + * for __bpf_prog_enter() return value and + * func_addr respectively + */ + int r14_off; /* Offset of saved %r14 */ + int run_ctx_off; /* Offset of struct bpf_tramp_run_ctx */ + int do_fexit; /* do_fexit: label */ +}; + +static void load_imm64(struct bpf_jit *jit, int dst_reg, u64 val) +{ + /* llihf %dst_reg,val_hi */ + EMIT6_IMM(0xc00e0000, dst_reg, (val >> 32)); + /* oilf %rdst_reg,val_lo */ + EMIT6_IMM(0xc00d0000, dst_reg, val); +} + +static int invoke_bpf_prog(struct bpf_tramp_jit *tjit, + const struct btf_func_model *m, + struct bpf_tramp_link *tlink, bool save_ret) +{ + struct bpf_jit *jit = &tjit->common; + int cookie_off = tjit->run_ctx_off + + offsetof(struct bpf_tramp_run_ctx, bpf_cookie); + struct bpf_prog *p = tlink->link.prog; + int patch; + + /* + * run_ctx.cookie = tlink->cookie; + */ + + /* %r0 = tlink->cookie */ + load_imm64(jit, REG_W0, tlink->cookie); + /* stg %r0,cookie_off(%r15) */ + EMIT6_DISP_LH(0xe3000000, 0x0024, REG_W0, REG_0, REG_15, cookie_off); + + /* + * if ((start = __bpf_prog_enter(p, &run_ctx)) == 0) + * goto skip; + */ + + /* %r1 = __bpf_prog_enter */ + load_imm64(jit, REG_1, (u64)bpf_trampoline_enter(p)); + /* %r2 = p */ + load_imm64(jit, REG_2, (u64)p); + /* la %r3,run_ctx_off(%r15) */ + EMIT4_DISP(0x41000000, REG_3, REG_15, tjit->run_ctx_off); + /* %r1() */ + call_r1(jit); + /* ltgr %r7,%r2 */ + EMIT4(0xb9020000, REG_7, REG_2); + /* brcl 8,skip */ + patch = jit->prg; + EMIT6_PCREL_RILC(0xc0040000, 8, 0); + + /* + * retval = bpf_func(args, p->insnsi); + */ + + /* %r1 = p->bpf_func */ + load_imm64(jit, REG_1, (u64)p->bpf_func); + /* la %r2,bpf_args_off(%r15) */ + EMIT4_DISP(0x41000000, REG_2, REG_15, tjit->bpf_args_off); + /* %r3 = p->insnsi */ + if (!p->jited) + load_imm64(jit, REG_3, (u64)p->insnsi); + /* %r1() */ + call_r1(jit); + /* stg %r2,retval_off(%r15) */ + if (save_ret) { + if (sign_extend(jit, REG_2, m->ret_size, m->ret_flags)) + return -1; + EMIT6_DISP_LH(0xe3000000, 0x0024, REG_2, REG_0, REG_15, + tjit->retval_off); + } + + /* skip: */ + if (jit->prg_buf) + *(u32 *)&jit->prg_buf[patch + 2] = (jit->prg - patch) >> 1; + + /* + * __bpf_prog_exit(p, start, &run_ctx); + */ + + /* %r1 = __bpf_prog_exit */ + load_imm64(jit, REG_1, (u64)bpf_trampoline_exit(p)); + /* %r2 = p */ + load_imm64(jit, REG_2, (u64)p); + /* lgr %r3,%r7 */ + EMIT4(0xb9040000, REG_3, REG_7); + /* la %r4,run_ctx_off(%r15) */ + EMIT4_DISP(0x41000000, REG_4, REG_15, tjit->run_ctx_off); + /* %r1() */ + call_r1(jit); + + return 0; +} + +static int alloc_stack(struct bpf_tramp_jit *tjit, size_t size) +{ + int stack_offset = tjit->stack_size; + + tjit->stack_size += size; + return stack_offset; +} + +/* ABI uses %r2 - %r6 for parameter passing. 
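
The register/stack split the trampoline uses for forwarding arguments, restated as plain C; the constants come from the defines just below (%r2..%r6 carry the first five arguments, one "mvc" can copy at most 256 bytes of stack arguments), the function itself is a stand-in:

    #define MAX_NR_REG_ARGS_SK   5
    #define MAX_MVC_SIZE_SK      256
    #define MAX_NR_STACK_ARGS_SK (MAX_MVC_SIZE_SK / (int)sizeof(unsigned long))

    static int split_args(int nr_args, int *nr_reg, int *nr_stack)
    {
        *nr_reg = nr_args < MAX_NR_REG_ARGS_SK ? nr_args : MAX_NR_REG_ARGS_SK;
        *nr_stack = nr_args - *nr_reg;
        if (*nr_stack > MAX_NR_STACK_ARGS_SK)
            return -1;  /* more than a single "mvc" can forward */
        return 0;
    }
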
*/ +#define MAX_NR_REG_ARGS 5 + +/* The "L" field of the "mvc" instruction is 8 bits. */ +#define MAX_MVC_SIZE 256 +#define MAX_NR_STACK_ARGS (MAX_MVC_SIZE / sizeof(u64)) + +/* -mfentry generates a 6-byte nop on s390x. */ +#define S390X_PATCH_SIZE 6 + +static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, + struct bpf_tramp_jit *tjit, + const struct btf_func_model *m, + u32 flags, + struct bpf_tramp_links *tlinks, + void *func_addr) +{ + struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN]; + struct bpf_tramp_links *fentry = &tlinks[BPF_TRAMP_FENTRY]; + struct bpf_tramp_links *fexit = &tlinks[BPF_TRAMP_FEXIT]; + int nr_bpf_args, nr_reg_args, nr_stack_args; + struct bpf_jit *jit = &tjit->common; + int arg, bpf_arg_off; + int i, j; + + /* Support as many stack arguments as "mvc" instruction can handle. */ + nr_reg_args = min_t(int, m->nr_args, MAX_NR_REG_ARGS); + nr_stack_args = m->nr_args - nr_reg_args; + if (nr_stack_args > MAX_NR_STACK_ARGS) + return -ENOTSUPP; + + /* Return to %r14, since func_addr and %r0 are not available. */ + if (!func_addr && !(flags & BPF_TRAMP_F_ORIG_STACK)) + flags |= BPF_TRAMP_F_SKIP_FRAME; + + /* + * Compute how many arguments we need to pass to BPF programs. + * BPF ABI mirrors that of x86_64: arguments that are 16 bytes or + * smaller are packed into 1 or 2 registers; larger arguments are + * passed via pointers. + * In s390x ABI, arguments that are 8 bytes or smaller are packed into + * a register; larger arguments are passed via pointers. + * We need to deal with this difference. + */ + nr_bpf_args = 0; + for (i = 0; i < m->nr_args; i++) { + if (m->arg_size[i] <= 8) + nr_bpf_args += 1; + else if (m->arg_size[i] <= 16) + nr_bpf_args += 2; + else + return -ENOTSUPP; + } + + /* + * Calculate the stack layout. + */ + + /* Reserve STACK_FRAME_OVERHEAD bytes for the callees. */ + tjit->stack_size = STACK_FRAME_OVERHEAD; + tjit->stack_args_off = alloc_stack(tjit, nr_stack_args * sizeof(u64)); + tjit->reg_args_off = alloc_stack(tjit, nr_reg_args * sizeof(u64)); + tjit->ip_off = alloc_stack(tjit, sizeof(u64)); + tjit->arg_cnt_off = alloc_stack(tjit, sizeof(u64)); + tjit->bpf_args_off = alloc_stack(tjit, nr_bpf_args * sizeof(u64)); + tjit->retval_off = alloc_stack(tjit, sizeof(u64)); + tjit->r7_r8_off = alloc_stack(tjit, 2 * sizeof(u64)); + tjit->r14_off = alloc_stack(tjit, sizeof(u64)); + tjit->run_ctx_off = alloc_stack(tjit, + sizeof(struct bpf_tramp_run_ctx)); + /* The caller has already reserved STACK_FRAME_OVERHEAD bytes. 
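
The sequence of tjit->..._off assignments above is driven by alloc_stack(), which is just a bump allocator over the trampoline frame. A stand-alone rendition, with the struct reduced to the one field the allocator touches:

    struct tramp_layout {
        int stack_size; /* running total of the frame */
    };

    static int alloc_stack_sketch(struct tramp_layout *l, int size)
    {
        int off = l->stack_size; /* current top becomes the new offset */

        l->stack_size += size;   /* grow the frame */
        return off;
    }

Each field is thus laid out in allocation order, and once every slot is placed the caller-provided STACK_FRAME_OVERHEAD is subtracted back out, since the caller already reserved it.
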
*/ + tjit->stack_size -= STACK_FRAME_OVERHEAD; + tjit->orig_stack_args_off = tjit->stack_size + STACK_FRAME_OVERHEAD; + + /* aghi %r15,-stack_size */ + EMIT4_IMM(0xa70b0000, REG_15, -tjit->stack_size); + /* stmg %r2,%rN,fwd_reg_args_off(%r15) */ + if (nr_reg_args) + EMIT6_DISP_LH(0xeb000000, 0x0024, REG_2, + REG_2 + (nr_reg_args - 1), REG_15, + tjit->reg_args_off); + for (i = 0, j = 0; i < m->nr_args; i++) { + if (i < MAX_NR_REG_ARGS) + arg = REG_2 + i; + else + arg = tjit->orig_stack_args_off + + (i - MAX_NR_REG_ARGS) * sizeof(u64); + bpf_arg_off = tjit->bpf_args_off + j * sizeof(u64); + if (m->arg_size[i] <= 8) { + if (i < MAX_NR_REG_ARGS) + /* stg %arg,bpf_arg_off(%r15) */ + EMIT6_DISP_LH(0xe3000000, 0x0024, arg, + REG_0, REG_15, bpf_arg_off); + else + /* mvc bpf_arg_off(8,%r15),arg(%r15) */ + _EMIT6(0xd207f000 | bpf_arg_off, + 0xf000 | arg); + j += 1; + } else { + if (i < MAX_NR_REG_ARGS) { + /* mvc bpf_arg_off(16,%r15),0(%arg) */ + _EMIT6(0xd20ff000 | bpf_arg_off, + reg2hex[arg] << 12); + } else { + /* lg %r1,arg(%r15) */ + EMIT6_DISP_LH(0xe3000000, 0x0004, REG_1, REG_0, + REG_15, arg); + /* mvc bpf_arg_off(16,%r15),0(%r1) */ + _EMIT6(0xd20ff000 | bpf_arg_off, 0x1000); + } + j += 2; + } + } + /* stmg %r7,%r8,r7_r8_off(%r15) */ + EMIT6_DISP_LH(0xeb000000, 0x0024, REG_7, REG_8, REG_15, + tjit->r7_r8_off); + /* stg %r14,r14_off(%r15) */ + EMIT6_DISP_LH(0xe3000000, 0x0024, REG_14, REG_0, REG_15, tjit->r14_off); + + if (flags & BPF_TRAMP_F_ORIG_STACK) { + /* + * The ftrace trampoline puts the return address (which is the + * address of the original function + S390X_PATCH_SIZE) into + * %r0; see ftrace_shared_hotpatch_trampoline_br and + * ftrace_init_nop() for details. + */ + + /* lgr %r8,%r0 */ + EMIT4(0xb9040000, REG_8, REG_0); + } else { + /* %r8 = func_addr + S390X_PATCH_SIZE */ + load_imm64(jit, REG_8, (u64)func_addr + S390X_PATCH_SIZE); + } + + /* + * ip = func_addr; + * arg_cnt = m->nr_args; + */ + + if (flags & BPF_TRAMP_F_IP_ARG) { + /* %r0 = func_addr */ + load_imm64(jit, REG_0, (u64)func_addr); + /* stg %r0,ip_off(%r15) */ + EMIT6_DISP_LH(0xe3000000, 0x0024, REG_0, REG_0, REG_15, + tjit->ip_off); + } + /* lghi %r0,nr_bpf_args */ + EMIT4_IMM(0xa7090000, REG_0, nr_bpf_args); + /* stg %r0,arg_cnt_off(%r15) */ + EMIT6_DISP_LH(0xe3000000, 0x0024, REG_0, REG_0, REG_15, + tjit->arg_cnt_off); + + if (flags & BPF_TRAMP_F_CALL_ORIG) { + /* + * __bpf_tramp_enter(im); + */ + + /* %r1 = __bpf_tramp_enter */ + load_imm64(jit, REG_1, (u64)__bpf_tramp_enter); + /* %r2 = im */ + load_imm64(jit, REG_2, (u64)im); + /* %r1() */ + call_r1(jit); + } + + for (i = 0; i < fentry->nr_links; i++) + if (invoke_bpf_prog(tjit, m, fentry->links[i], + flags & BPF_TRAMP_F_RET_FENTRY_RET)) + return -EINVAL; + + if (fmod_ret->nr_links) { + /* + * retval = 0; + */ + + /* xc retval_off(8,%r15),retval_off(%r15) */ + _EMIT6(0xd707f000 | tjit->retval_off, + 0xf000 | tjit->retval_off); + + for (i = 0; i < fmod_ret->nr_links; i++) { + if (invoke_bpf_prog(tjit, m, fmod_ret->links[i], true)) + return -EINVAL; + + /* + * if (retval) + * goto do_fexit; + */ + + /* ltg %r0,retval_off(%r15) */ + EMIT6_DISP_LH(0xe3000000, 0x0002, REG_0, REG_0, REG_15, + tjit->retval_off); + /* brcl 7,do_fexit */ + EMIT6_PCREL_RILC(0xc0040000, 7, tjit->do_fexit); + } + } + + if (flags & BPF_TRAMP_F_CALL_ORIG) { + /* + * retval = func_addr(args); + */ + + /* lmg %r2,%rN,reg_args_off(%r15) */ + if (nr_reg_args) + EMIT6_DISP_LH(0xeb000000, 0x0004, REG_2, + REG_2 + (nr_reg_args - 1), REG_15, + tjit->reg_args_off); + /* mvc 
stack_args_off(N,%r15),orig_stack_args_off(%r15) */ + if (nr_stack_args) + _EMIT6(0xd200f000 | + (nr_stack_args * sizeof(u64) - 1) << 16 | + tjit->stack_args_off, + 0xf000 | tjit->orig_stack_args_off); + /* lgr %r1,%r8 */ + EMIT4(0xb9040000, REG_1, REG_8); + /* %r1() */ + call_r1(jit); + /* stg %r2,retval_off(%r15) */ + EMIT6_DISP_LH(0xe3000000, 0x0024, REG_2, REG_0, REG_15, + tjit->retval_off); + + im->ip_after_call = jit->prg_buf + jit->prg; + + /* + * The following nop will be patched by bpf_tramp_image_put(). + */ + + /* brcl 0,im->ip_epilogue */ + EMIT6_PCREL_RILC(0xc0040000, 0, (u64)im->ip_epilogue); + } + + /* do_fexit: */ + tjit->do_fexit = jit->prg; + for (i = 0; i < fexit->nr_links; i++) + if (invoke_bpf_prog(tjit, m, fexit->links[i], false)) + return -EINVAL; + + if (flags & BPF_TRAMP_F_CALL_ORIG) { + im->ip_epilogue = jit->prg_buf + jit->prg; + + /* + * __bpf_tramp_exit(im); + */ + + /* %r1 = __bpf_tramp_exit */ + load_imm64(jit, REG_1, (u64)__bpf_tramp_exit); + /* %r2 = im */ + load_imm64(jit, REG_2, (u64)im); + /* %r1() */ + call_r1(jit); + } + + /* lmg %r2,%rN,reg_args_off(%r15) */ + if ((flags & BPF_TRAMP_F_RESTORE_REGS) && nr_reg_args) + EMIT6_DISP_LH(0xeb000000, 0x0004, REG_2, + REG_2 + (nr_reg_args - 1), REG_15, + tjit->reg_args_off); + /* lgr %r1,%r8 */ + if (!(flags & BPF_TRAMP_F_SKIP_FRAME)) + EMIT4(0xb9040000, REG_1, REG_8); + /* lmg %r7,%r8,r7_r8_off(%r15) */ + EMIT6_DISP_LH(0xeb000000, 0x0004, REG_7, REG_8, REG_15, + tjit->r7_r8_off); + /* lg %r14,r14_off(%r15) */ + EMIT6_DISP_LH(0xe3000000, 0x0004, REG_14, REG_0, REG_15, tjit->r14_off); + /* lg %r2,retval_off(%r15) */ + if (flags & (BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_RET_FENTRY_RET)) + EMIT6_DISP_LH(0xe3000000, 0x0004, REG_2, REG_0, REG_15, + tjit->retval_off); + /* aghi %r15,stack_size */ + EMIT4_IMM(0xa70b0000, REG_15, tjit->stack_size); + /* Emit an expoline for the following indirect jump. */ + if (nospec_uses_trampoline()) + emit_expoline(jit); + if (flags & BPF_TRAMP_F_SKIP_FRAME) + /* br %r14 */ + _EMIT2(0x07fe); + else + /* br %r1 */ + _EMIT2(0x07f1); + + emit_r1_thunk(jit); + + return 0; +} + +int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, + void *image_end, const struct btf_func_model *m, + u32 flags, struct bpf_tramp_links *tlinks, + void *func_addr) +{ + struct bpf_tramp_jit tjit; + int ret; + int i; + + for (i = 0; i < 2; i++) { + if (i == 0) { + /* Compute offsets, check whether the code fits. */ + memset(&tjit, 0, sizeof(tjit)); + } else { + /* Generate the code. */ + tjit.common.prg = 0; + tjit.common.prg_buf = image; + } + ret = __arch_prepare_bpf_trampoline(im, &tjit, m, flags, + tlinks, func_addr); + if (ret < 0) + return ret; + if (tjit.common.prg > (char *)image_end - (char *)image) + /* + * Use the same error code as for exceeding + * BPF_MAX_TRAMP_LINKS. 
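
arch_prepare_bpf_trampoline() below runs the generator twice: a first pass with no output buffer to measure the code, then a second pass against the real image. The skeleton of that size-then-emit discipline, with emit() and its buffer as stand-ins:

    #include <stddef.h>
    #include <string.h>
    #include <errno.h>

    struct jit_sketch {
        char *buf;  /* NULL on the sizing pass */
        size_t prg; /* bytes emitted so far */
    };

    static void emit(struct jit_sketch *j, const void *code, size_t len)
    {
        if (j->buf)                         /* second pass: write code */
            memcpy(j->buf + j->prg, code, len);
        j->prg += len;                      /* both passes: count bytes */
    }

    static int jit_two_pass(char *image, size_t image_size)
    {
        struct jit_sketch j;
        int pass;

        for (pass = 0; pass < 2; pass++) {
            j.prg = 0;
            j.buf = pass ? image : NULL;
            emit(&j, "\x07\xfe", 2);        /* e.g. "br %r14" */
            if (j.prg > image_size)
                return -E2BIG;              /* generated code does not fit */
        }
        return 0;
    }
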
+ */ + return -E2BIG; + } + + return ret; +} + +bool bpf_jit_supports_subprog_tailcalls(void) +{ + return true; +} diff --git a/arch/sh/kernel/vmlinux.lds.S b/arch/sh/kernel/vmlinux.lds.S index 3161b9ccd2a5..b6276a3521d7 100644 --- a/arch/sh/kernel/vmlinux.lds.S +++ b/arch/sh/kernel/vmlinux.lds.S @@ -4,6 +4,7 @@ * Written by Niibe Yutaka and Paul Mundt */ OUTPUT_ARCH(sh) +#define RUNTIME_DISCARD_EXIT #include <asm/thread_info.h> #include <asm/cache.h> #include <asm/vmlinux.lds.h> diff --git a/arch/x86/include/asm/debugreg.h b/arch/x86/include/asm/debugreg.h index b049d950612f..ca97442e8d49 100644 --- a/arch/x86/include/asm/debugreg.h +++ b/arch/x86/include/asm/debugreg.h @@ -39,7 +39,20 @@ static __always_inline unsigned long native_get_debugreg(int regno) asm("mov %%db6, %0" :"=r" (val)); break; case 7: - asm("mov %%db7, %0" :"=r" (val)); + /* + * Apply __FORCE_ORDER to DR7 reads to forbid re-ordering them + * with other code. + * + * This is needed because a DR7 access can cause a #VC exception + * when running under SEV-ES. Taking a #VC exception is not a + * safe thing to do just anywhere in the entry code and + * re-ordering might place the access into an unsafe location. + * + * This happened in the NMI handler, where the DR7 read was + * re-ordered to happen before the call to sev_es_ist_enter(), + * causing stack recursion. + */ + asm volatile("mov %%db7, %0" : "=r" (val) : __FORCE_ORDER); break; default: BUG(); @@ -66,7 +79,16 @@ static __always_inline void native_set_debugreg(int regno, unsigned long value) asm("mov %0, %%db6" ::"r" (value)); break; case 7: - asm("mov %0, %%db7" ::"r" (value)); + /* + * Apply __FORCE_ORDER to DR7 writes to forbid re-ordering them + * with other code. + * + * While is didn't happen with a DR7 write (see the DR7 read + * comment above which explains where it happened), add the + * __FORCE_ORDER here too to avoid similar problems in the + * future. + */ + asm volatile("mov %0, %%db7" ::"r" (value), __FORCE_ORDER); break; default: BUG(); diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c index 7d9b15f0dbd5..0fbde0fc0628 100644 --- a/block/bfq-cgroup.c +++ b/block/bfq-cgroup.c @@ -769,8 +769,8 @@ static void __bfq_bic_change_cgroup(struct bfq_data *bfqd, * request from the old cgroup. 
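
The two bfq hunks that follow are the same fix: bic_set_bfqq() must clear the cached queue pointer before bfq_release_process_ref() drops what may be the last reference, otherwise the bic briefly points at freed memory. The ordering rule in miniature, with stand-in types rather than bfq's:

    #include <stdlib.h>

    struct obj {
        int refcnt;
    };

    static void put_obj(struct obj *o)
    {
        if (--o->refcnt == 0)
            free(o);    /* may be the last reference */
    }

    static void detach(struct obj **cache)
    {
        struct obj *o = *cache;

        *cache = NULL;  /* 1: nobody can find the stale pointer ... */
        put_obj(o);     /* 2: ... even if this frees the object */
    }
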
*/ bfq_put_cooperator(sync_bfqq); - bfq_release_process_ref(bfqd, sync_bfqq); bic_set_bfqq(bic, NULL, true); + bfq_release_process_ref(bfqd, sync_bfqq); } } } diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index ccf2204477a5..380e9bda2e57 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -5425,9 +5425,11 @@ static void bfq_check_ioprio_change(struct bfq_io_cq *bic, struct bio *bio) bfqq = bic_to_bfqq(bic, false); if (bfqq) { - bfq_release_process_ref(bfqd, bfqq); + struct bfq_queue *old_bfqq = bfqq; + bfqq = bfq_get_queue(bfqd, bio, false, bic, true); bic_set_bfqq(bic, bfqq, false); + bfq_release_process_ref(bfqd, old_bfqq); } bfqq = bic_to_bfqq(bic, true); diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c index 4c94a6560f62..9ac1efb053e0 100644 --- a/block/blk-cgroup.c +++ b/block/blk-cgroup.c @@ -2001,6 +2001,10 @@ void blk_cgroup_bio_start(struct bio *bio) struct blkg_iostat_set *bis; unsigned long flags; + /* Root-level stats are sourced from system-wide IO stats */ + if (!cgroup_parent(blkcg->css.cgroup)) + return; + cpu = get_cpu(); bis = per_cpu_ptr(bio->bi_blkg->iostat_cpu, cpu); flags = u64_stats_update_begin_irqsave(&bis->sync); diff --git a/block/blk-mq.c b/block/blk-mq.c index 9d463f7563bc..9c8dc70020bc 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -4069,8 +4069,9 @@ EXPORT_SYMBOL(blk_mq_init_queue); * blk_mq_destroy_queue - shutdown a request queue * @q: request queue to shutdown * - * This shuts down a request queue allocated by blk_mq_init_queue() and drops - * the initial reference. All future requests will failed with -ENODEV. + * This shuts down a request queue allocated by blk_mq_init_queue(). All future + * requests will be failed with -ENODEV. The caller is responsible for dropping + * the reference from blk_mq_init_queue() by calling blk_put_queue(). * * Context: can sleep */ diff --git a/certs/Makefile b/certs/Makefile index 9486ed924731..799ad7b9e68a 100644 --- a/certs/Makefile +++ b/certs/Makefile @@ -23,8 +23,8 @@ $(obj)/blacklist_hash_list: $(CONFIG_SYSTEM_BLACKLIST_HASH_LIST) FORCE targets += blacklist_hash_list quiet_cmd_extract_certs = CERT $@ - cmd_extract_certs = $(obj)/extract-cert $(extract-cert-in) $@ -extract-cert-in = $(or $(filter-out $(obj)/extract-cert, $(real-prereqs)),"") + cmd_extract_certs = $(obj)/extract-cert "$(extract-cert-in)" $@ +extract-cert-in = $(filter-out $(obj)/extract-cert, $(real-prereqs)) $(obj)/system_certificates.o: $(obj)/x509_certificate_list diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c index 884ae73b11ea..2ea572628b1c 100644 --- a/drivers/ata/libata-core.c +++ b/drivers/ata/libata-core.c @@ -3109,7 +3109,7 @@ int sata_down_spd_limit(struct ata_link *link, u32 spd_limit) */ if (spd > 1) mask &= (1 << (spd - 1)) - 1; - else + else if (link->sata_spd) return -EINVAL; /* were we already at the bottom? 
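
The libata hunk above only returns -EINVAL when a link speed was actually reported (link->sata_spd non-zero); the mask arithmetic itself keeps every generation strictly below the current one, with bit N-1 meaning "gen N supported". Checked in user space:

    #include <assert.h>

    static unsigned int limit_below(unsigned int mask, int spd)
    {
        if (spd > 1)
            return mask & ((1u << (spd - 1)) - 1);
        return 0;   /* nothing lower; caller decides whether to error */
    }

    int main(void)
    {
        /* gen1..gen3 supported, currently at gen3: keep gen1+gen2 */
        assert(limit_below(0x7, 3) == 0x3);
        /* already at gen1: nothing lower remains */
        assert(limit_below(0x1, 1) == 0x0);
        return 0;
    }
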
*/ diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c index e54693204630..6368b56eacf1 100644 --- a/drivers/block/ublk_drv.c +++ b/drivers/block/ublk_drv.c @@ -137,7 +137,7 @@ struct ublk_device { char *__queues; - unsigned short queue_size; + unsigned int queue_size; struct ublksrv_ctrl_dev_info dev_info; struct blk_mq_tag_set tag_set; diff --git a/drivers/bluetooth/btintel.c b/drivers/bluetooth/btintel.c index d4e2cb9a4eb4..bede8b005594 100644 --- a/drivers/bluetooth/btintel.c +++ b/drivers/bluetooth/btintel.c @@ -9,6 +9,7 @@ #include <linux/module.h> #include <linux/firmware.h> #include <linux/regmap.h> +#include <linux/acpi.h> #include <asm/unaligned.h> #include <net/bluetooth/bluetooth.h> @@ -24,6 +25,9 @@ #define ECDSA_OFFSET 644 #define ECDSA_HEADER_LEN 320 +#define BTINTEL_PPAG_NAME "PPAG" +#define BTINTEL_PPAG_PREFIX "\\_SB_.PCI0.XHCI.RHUB" + #define CMD_WRITE_BOOT_PARAMS 0xfc0e struct cmd_write_boot_params { __le32 boot_addr; @@ -1278,6 +1282,63 @@ static int btintel_read_debug_features(struct hci_dev *hdev, return 0; } +static acpi_status btintel_ppag_callback(acpi_handle handle, u32 lvl, void *data, + void **ret) +{ + acpi_status status; + size_t len; + struct btintel_ppag *ppag = data; + union acpi_object *p, *elements; + struct acpi_buffer string = {ACPI_ALLOCATE_BUFFER, NULL}; + struct acpi_buffer buffer = {ACPI_ALLOCATE_BUFFER, NULL}; + struct hci_dev *hdev = ppag->hdev; + + status = acpi_get_name(handle, ACPI_FULL_PATHNAME, &string); + if (ACPI_FAILURE(status)) { + bt_dev_warn(hdev, "ACPI Failure: %s", acpi_format_exception(status)); + return status; + } + + if (strncmp(BTINTEL_PPAG_PREFIX, string.pointer, + strlen(BTINTEL_PPAG_PREFIX))) { + kfree(string.pointer); + return AE_OK; + } + + len = strlen(string.pointer); + if (strncmp((char *)string.pointer + len - 4, BTINTEL_PPAG_NAME, 4)) { + kfree(string.pointer); + return AE_OK; + } + kfree(string.pointer); + + status = acpi_evaluate_object(handle, NULL, NULL, &buffer); + if (ACPI_FAILURE(status)) { + bt_dev_warn(hdev, "ACPI Failure: %s", acpi_format_exception(status)); + return status; + } + + p = buffer.pointer; + ppag = (struct btintel_ppag *)data; + + if (p->type != ACPI_TYPE_PACKAGE || p->package.count != 2) { + kfree(buffer.pointer); + bt_dev_warn(hdev, "Invalid object type: %d or package count: %d", + p->type, p->package.count); + return AE_ERROR; + } + + elements = p->package.elements; + + /* PPAG table is located at element[1] */ + p = &elements[1]; + + ppag->domain = (u32)p->package.elements[0].integer.value; + ppag->mode = (u32)p->package.elements[1].integer.value; + kfree(buffer.pointer); + return AE_CTRL_TERMINATE; +} + static int btintel_set_debug_features(struct hci_dev *hdev, const struct intel_debug_features *features) { @@ -2251,6 +2312,58 @@ error: return err; } +static void btintel_set_ppag(struct hci_dev *hdev, struct intel_version_tlv *ver) +{ + acpi_status status; + struct btintel_ppag ppag; + struct sk_buff *skb; + struct btintel_loc_aware_reg ppag_cmd; + + /* PPAG is not supported if CRF is HrP2, Jfp2, JfP1 */ + switch (ver->cnvr_top & 0xFFF) { + case 0x504: /* Hrp2 */ + case 0x202: /* Jfp2 */ + case 0x201: /* Jfp1 */ + return; + } + + memset(&ppag, 0, sizeof(ppag)); + + ppag.hdev = hdev; + status = acpi_walk_namespace(ACPI_TYPE_ANY, ACPI_ROOT_OBJECT, + ACPI_UINT32_MAX, NULL, + btintel_ppag_callback, &ppag, NULL); + + if (ACPI_FAILURE(status)) { + /* Do not log warning message if ACPI entry is not found */ + if (status == AE_NOT_FOUND) + return; + bt_dev_warn(hdev, "PPAG: ACPI 
Failure: %s", acpi_format_exception(status)); + return; + } + + if (ppag.domain != 0x12) { + bt_dev_warn(hdev, "PPAG-BT Domain disabled"); + return; + } + + /* PPAG mode, BIT0 = 0 Disabled, BIT0 = 1 Enabled */ + if (!(ppag.mode & BIT(0))) { + bt_dev_dbg(hdev, "PPAG disabled"); + return; + } + + ppag_cmd.mcc = cpu_to_le32(0); + ppag_cmd.sel = cpu_to_le32(0); /* 0 - Enable , 1 - Disable, 2 - Testing mode */ + ppag_cmd.delta = cpu_to_le32(0); + skb = __hci_cmd_sync(hdev, 0xfe19, sizeof(ppag_cmd), &ppag_cmd, HCI_CMD_TIMEOUT); + if (IS_ERR(skb)) { + bt_dev_warn(hdev, "Failed to send PPAG Enable (%ld)", PTR_ERR(skb)); + return; + } + kfree_skb(skb); +} + static int btintel_bootloader_setup_tlv(struct hci_dev *hdev, struct intel_version_tlv *ver) { @@ -2297,6 +2410,9 @@ static int btintel_bootloader_setup_tlv(struct hci_dev *hdev, hci_dev_clear_flag(hdev, HCI_QUALITY_REPORT); + /* Set PPAG feature */ + btintel_set_ppag(hdev, ver); + /* Read the Intel version information after loading the FW */ err = btintel_read_version_tlv(hdev, &new_ver); if (err) diff --git a/drivers/bluetooth/btintel.h b/drivers/bluetooth/btintel.h index e0060e58573c..8e7da877efae 100644 --- a/drivers/bluetooth/btintel.h +++ b/drivers/bluetooth/btintel.h @@ -137,6 +137,19 @@ struct intel_offload_use_cases { __u8 preset[8]; } __packed; +/* structure to store the PPAG data read from ACPI table */ +struct btintel_ppag { + u32 domain; + u32 mode; + struct hci_dev *hdev; +}; + +struct btintel_loc_aware_reg { + __le32 mcc; + __le32 sel; + __le32 delta; +} __packed; + #define INTEL_HW_PLATFORM(cnvx_bt) ((u8)(((cnvx_bt) & 0x0000ff00) >> 8)) #define INTEL_HW_VARIANT(cnvx_bt) ((u8)(((cnvx_bt) & 0x003f0000) >> 16)) #define INTEL_CNVX_TOP_TYPE(cnvx_top) ((cnvx_top) & 0x00000fff) diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c index 2ad4efdd9e40..18bc94718711 100644 --- a/drivers/bluetooth/btusb.c +++ b/drivers/bluetooth/btusb.c @@ -64,6 +64,7 @@ static struct usb_driver btusb_driver; #define BTUSB_INTEL_BROKEN_SHUTDOWN_LED BIT(24) #define BTUSB_INTEL_BROKEN_INITIAL_NCMD BIT(25) #define BTUSB_INTEL_NO_WBS_SUPPORT BIT(26) +#define BTUSB_ACTIONS_SEMI BIT(27) static const struct usb_device_id btusb_table[] = { /* Generic Bluetooth USB device */ @@ -492,6 +493,10 @@ static const struct usb_device_id blacklist_table[] = { { USB_VENDOR_AND_INTERFACE_INFO(0x8087, 0xe0, 0x01, 0x01), .driver_info = BTUSB_IGNORE }, + /* Realtek 8821CE Bluetooth devices */ + { USB_DEVICE(0x13d3, 0x3529), .driver_info = BTUSB_REALTEK | + BTUSB_WIDEBAND_SPEECH }, + /* Realtek 8822CE Bluetooth devices */ { USB_DEVICE(0x0bda, 0xb00c), .driver_info = BTUSB_REALTEK | BTUSB_WIDEBAND_SPEECH }, @@ -566,6 +571,9 @@ static const struct usb_device_id blacklist_table[] = { { USB_DEVICE(0x0489, 0xe0e0), .driver_info = BTUSB_MEDIATEK | BTUSB_WIDEBAND_SPEECH | BTUSB_VALID_LE_STATES }, + { USB_DEVICE(0x0489, 0xe0f2), .driver_info = BTUSB_MEDIATEK | + BTUSB_WIDEBAND_SPEECH | + BTUSB_VALID_LE_STATES }, { USB_DEVICE(0x04ca, 0x3802), .driver_info = BTUSB_MEDIATEK | BTUSB_WIDEBAND_SPEECH | BTUSB_VALID_LE_STATES }, @@ -677,6 +685,9 @@ static const struct usb_device_id blacklist_table[] = { { USB_DEVICE(0x0cb5, 0xc547), .driver_info = BTUSB_REALTEK | BTUSB_WIDEBAND_SPEECH }, + /* Actions Semiconductor ATS2851 based devices */ + { USB_DEVICE(0x10d7, 0xb012), .driver_info = BTUSB_ACTIONS_SEMI }, + /* Silicon Wave based devices */ { USB_DEVICE(0x0c10, 0x0000), .driver_info = BTUSB_SWAVE }, @@ -4098,6 +4109,11 @@ static int btusb_probe(struct usb_interface *intf, 
set_bit(BTUSB_USE_ALT3_FOR_WBS, &data->flags); } + if (id->driver_info & BTUSB_ACTIONS_SEMI) { + /* Support is advertised, but not implemented */ + set_bit(HCI_QUIRK_BROKEN_ERR_DATA_REPORTING, &hdev->quirks); + } + if (!reset) set_bit(HCI_QUIRK_RESET_ON_CLOSE, &hdev->quirks); diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c index bbe9cf1cae27..3df8c3606e93 100644 --- a/drivers/bluetooth/hci_qca.c +++ b/drivers/bluetooth/hci_qca.c @@ -128,13 +128,13 @@ struct qca_memdump_event_hdr { __u8 evt; __u8 plen; __u16 opcode; - __u16 seq_no; + __le16 seq_no; __u8 reserved; } __packed; struct qca_dump_size { - u32 dump_size; + __le32 dump_size; } __packed; struct qca_data { @@ -1588,10 +1588,11 @@ static bool qca_wakeup(struct hci_dev *hdev) struct hci_uart *hu = hci_get_drvdata(hdev); bool wakeup; - /* UART driver handles the interrupt from BT SoC.So we need to use - * device handle of UART driver to get the status of device may wakeup. + /* BT SoC attached through the serial bus is handled by the serdev driver. + * So we need to use the device handle of the serdev driver to get the + * status of device may wakeup. */ - wakeup = device_may_wakeup(hu->serdev->ctrl->dev.parent); + wakeup = device_may_wakeup(&hu->serdev->ctrl->dev); bt_dev_dbg(hu->hdev, "wakeup status : %d", wakeup); return wakeup; diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c index 406b4e26f538..0de0482cd36e 100644 --- a/drivers/dma-buf/dma-fence.c +++ b/drivers/dma-buf/dma-fence.c @@ -167,7 +167,7 @@ struct dma_fence *dma_fence_allocate_private_stub(void) 0, 0); set_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT, - &dma_fence_stub.flags); + &fence->flags); dma_fence_signal(fence); diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c index a2b0cbc8741c..1e0b016fdc2b 100644 --- a/drivers/firmware/efi/efi.c +++ b/drivers/firmware/efi/efi.c @@ -1007,6 +1007,8 @@ int __ref efi_mem_reserve_persistent(phys_addr_t addr, u64 size) /* first try to find a slot in an existing linked list entry */ for (prsv = efi_memreserve_root->next; prsv; ) { rsv = memremap(prsv, sizeof(*rsv), MEMREMAP_WB); + if (!rsv) + return -ENOMEM; index = atomic_fetch_add_unless(&rsv->count, 1, rsv->size); if (index < rsv->size) { rsv->entry[index].base = addr; diff --git a/drivers/firmware/efi/memattr.c b/drivers/firmware/efi/memattr.c index 0a9aba5f9cef..f178b2984dfb 100644 --- a/drivers/firmware/efi/memattr.c +++ b/drivers/firmware/efi/memattr.c @@ -33,7 +33,7 @@ int __init efi_memattr_init(void) return -ENOMEM; } - if (tbl->version > 1) { + if (tbl->version > 2) { pr_warn("Unexpected EFI Memory Attributes table version %d\n", tbl->version); goto unmap; diff --git a/drivers/fpga/intel-m10-bmc-sec-update.c b/drivers/fpga/intel-m10-bmc-sec-update.c index 79d48852825e..03f1bd81c434 100644 --- a/drivers/fpga/intel-m10-bmc-sec-update.c +++ b/drivers/fpga/intel-m10-bmc-sec-update.c @@ -574,20 +574,27 @@ static int m10bmc_sec_probe(struct platform_device *pdev) len = scnprintf(buf, SEC_UPDATE_LEN_MAX, "secure-update%d", sec->fw_name_id); sec->fw_name = kmemdup_nul(buf, len, GFP_KERNEL); - if (!sec->fw_name) - return -ENOMEM; + if (!sec->fw_name) { + ret = -ENOMEM; + goto fw_name_fail; + } fwl = firmware_upload_register(THIS_MODULE, sec->dev, sec->fw_name, &m10bmc_ops, sec); if (IS_ERR(fwl)) { dev_err(sec->dev, "Firmware Upload driver failed to start\n"); - kfree(sec->fw_name); - xa_erase(&fw_upload_xa, sec->fw_name_id); - return PTR_ERR(fwl); + ret = PTR_ERR(fwl); + goto fw_uploader_fail; } sec->fwl = fwl; return 0; + 
+fw_uploader_fail: + kfree(sec->fw_name); +fw_name_fail: + xa_erase(&fw_upload_xa, sec->fw_name_id); + return ret; } static int m10bmc_sec_remove(struct platform_device *pdev) diff --git a/drivers/fpga/stratix10-soc.c b/drivers/fpga/stratix10-soc.c index 357cea58ec98..f7f01982a512 100644 --- a/drivers/fpga/stratix10-soc.c +++ b/drivers/fpga/stratix10-soc.c @@ -213,9 +213,9 @@ static int s10_ops_write_init(struct fpga_manager *mgr, /* Allocate buffers from the service layer's pool. */ for (i = 0; i < NUM_SVC_BUFS; i++) { kbuf = stratix10_svc_allocate_memory(priv->chan, SVC_BUF_SIZE); - if (!kbuf) { + if (IS_ERR(kbuf)) { s10_free_buffers(mgr); - ret = -ENOMEM; + ret = PTR_ERR(kbuf); goto init_done; } diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c index b9b57a66e113..66eb102cd88f 100644 --- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c @@ -790,8 +790,8 @@ static void gfx_v11_0_read_wave_data(struct amdgpu_device *adev, uint32_t simd, * zero here */ WARN_ON(simd != 0); - /* type 2 wave data */ - dst[(*no_fields)++] = 2; + /* type 3 wave data */ + dst[(*no_fields)++] = 3; dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_STATUS); dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_PC_LO); dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_PC_HI); diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v4_3.c b/drivers/gpu/drm/amd/amdgpu/nbio_v4_3.c index 15eb3658d70e..09fdcd20cb91 100644 --- a/drivers/gpu/drm/amd/amdgpu/nbio_v4_3.c +++ b/drivers/gpu/drm/amd/amdgpu/nbio_v4_3.c @@ -337,7 +337,13 @@ const struct nbio_hdp_flush_reg nbio_v4_3_hdp_flush_reg = { static void nbio_v4_3_init_registers(struct amdgpu_device *adev) { - return; + if (adev->ip_versions[NBIO_HWIP][0] == IP_VERSION(4, 3, 0)) { + uint32_t data; + + data = RREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF2_STRAP2); + data &= ~RCC_DEV0_EPF2_STRAP2__STRAP_NO_SOFT_RESET_DEV0_F2_MASK; + WREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF2_STRAP2, data); + } } static u32 nbio_v4_3_get_rom_offset(struct amdgpu_device *adev) diff --git a/drivers/gpu/drm/amd/amdgpu/soc21.c b/drivers/gpu/drm/amd/amdgpu/soc21.c index 5562670b7b52..9eff5f41df9d 100644 --- a/drivers/gpu/drm/amd/amdgpu/soc21.c +++ b/drivers/gpu/drm/amd/amdgpu/soc21.c @@ -640,7 +640,8 @@ static int soc21_common_early_init(void *handle) AMD_CG_SUPPORT_GFX_CGCG | AMD_CG_SUPPORT_GFX_CGLS | AMD_CG_SUPPORT_REPEATER_FGCG | - AMD_CG_SUPPORT_GFX_MGCG; + AMD_CG_SUPPORT_GFX_MGCG | + AMD_CG_SUPPORT_HDP_SD; adev->pg_flags = AMD_PG_SUPPORT_VCN | AMD_PG_SUPPORT_VCN_DPG | AMD_PG_SUPPORT_JPEG; diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c index af37bc6ed1f5..31bce529f685 100644 --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c @@ -4501,6 +4501,17 @@ DEVICE_ATTR_WO(s3_debug); static int dm_early_init(void *handle) { struct amdgpu_device *adev = (struct amdgpu_device *)handle; + struct amdgpu_mode_info *mode_info = &adev->mode_info; + struct atom_context *ctx = mode_info->atom_context; + int index = GetIndexIntoMasterTable(DATA, Object_Header); + u16 data_offset; + + /* if there is no object header, skip DM */ + if (!amdgpu_atom_parse_data_header(ctx, index, NULL, NULL, NULL, &data_offset)) { + adev->harvest_ip_mask |= AMD_HARVEST_IP_DMU_MASK; + dev_info(adev->dev, "No object header, skipping DM\n"); + return -ENOENT; + } switch (adev->asic_type) { #if defined(CONFIG_DRM_AMD_DC_SI) diff --git 
a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c index f9ea1e86707f..79850a68f62a 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c +++ b/drivers/gpu/drm/amd/display/dc/dcn314/dcn314_resource.c @@ -874,8 +874,9 @@ static const struct dc_plane_cap plane_cap = { }, // 6:1 downscaling ratio: 1000/6 = 166.666 + // 4:1 downscaling ratio for ARGB888 to prevent underflow during P010 playback: 1000/4 = 250 .max_downscale_factor = { - .argb8888 = 167, + .argb8888 = 250, .nv12 = 167, .fp16 = 167 }, @@ -1763,7 +1764,7 @@ static bool dcn314_resource_construct( pool->base.underlay_pipe_index = NO_UNDERLAY_PIPE; pool->base.pipe_count = pool->base.res_cap->num_timing_generator; pool->base.mpcc_count = pool->base.res_cap->num_timing_generator; - dc->caps.max_downscale_ratio = 600; + dc->caps.max_downscale_ratio = 400; dc->caps.i2c_speed_in_khz = 100; dc->caps.i2c_speed_in_khz_hdcp = 100; dc->caps.max_cursor_size = 256; diff --git a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_init.c b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_init.c index dc4649458567..a4e9fd5307c6 100644 --- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_init.c +++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_init.c @@ -94,7 +94,7 @@ static const struct hw_sequencer_funcs dcn32_funcs = { .get_vupdate_offset_from_vsync = dcn10_get_vupdate_offset_from_vsync, .calc_vupdate_position = dcn10_calc_vupdate_position, .apply_idle_power_optimizations = dcn32_apply_idle_power_optimizations, - .does_plane_fit_in_mall = dcn30_does_plane_fit_in_mall, + .does_plane_fit_in_mall = NULL, .set_backlight_level = dcn21_set_backlight_level, .set_abm_immediate_disable = dcn21_set_abm_immediate_disable, .hardware_release = dcn30_hardware_release, diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn314/display_mode_vba_314.c b/drivers/gpu/drm/amd/display/dc/dml/dcn314/display_mode_vba_314.c index 950669f2c10d..cb7c0c878423 100644 --- a/drivers/gpu/drm/amd/display/dc/dml/dcn314/display_mode_vba_314.c +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn314/display_mode_vba_314.c @@ -3183,7 +3183,7 @@ static void DISPCLKDPPCLKDCFCLKDeepSleepPrefetchParametersWatermarksAndPerforman } else { v->MIN_DST_Y_NEXT_START[k] = v->VTotal[k] - v->VFrontPorch[k] + v->VTotal[k] - v->VActive[k] - v->VStartup[k]; } - v->MIN_DST_Y_NEXT_START[k] += dml_floor(4.0 * v->TSetup[k] / (double)v->HTotal[k] / v->PixelClock[k], 1.0) / 4.0; + v->MIN_DST_Y_NEXT_START[k] += dml_floor(4.0 * v->TSetup[k] / ((double)v->HTotal[k] / v->PixelClock[k]), 1.0) / 4.0; if (((v->VUpdateOffsetPix[k] + v->VUpdateWidthPix[k] + v->VReadyOffsetPix[k]) / v->HTotal[k]) <= (isInterlaceTiming ? 
dml_floor((v->VTotal[k] - v->VActive[k] - v->VFrontPorch[k] - v->VStartup[k]) / 2.0, 1.0) : diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c index 4a122925c3ae..92c18bfb98b3 100644 --- a/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c +++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_srv.c @@ -532,6 +532,9 @@ enum dmub_status dmub_srv_hw_init(struct dmub_srv *dmub, if (dmub->hw_funcs.reset) dmub->hw_funcs.reset(dmub); + /* reset the cache of the last wptr as well now that hw is reset */ + dmub->inbox1_last_wptr = 0; + cw0.offset.quad_part = inst_fb->gpu_addr; cw0.region.base = DMUB_CW0_BASE; cw0.region.top = cw0.region.base + inst_fb->size - 1; @@ -649,6 +652,15 @@ enum dmub_status dmub_srv_hw_reset(struct dmub_srv *dmub) if (dmub->hw_funcs.reset) dmub->hw_funcs.reset(dmub); + /* mailboxes have been reset in hw, so reset the sw state as well */ + dmub->inbox1_last_wptr = 0; + dmub->inbox1_rb.wrpt = 0; + dmub->inbox1_rb.rptr = 0; + dmub->outbox0_rb.wrpt = 0; + dmub->outbox0_rb.rptr = 0; + dmub->outbox1_rb.wrpt = 0; + dmub->outbox1_rb.rptr = 0; + dmub->hw_init = false; return DMUB_STATUS_OK; diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c index 236657eece47..a9170360d7e8 100644 --- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c +++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c @@ -2007,14 +2007,16 @@ static int default_attr_update(struct amdgpu_device *adev, struct amdgpu_device_ gc_ver == IP_VERSION(10, 3, 0) || gc_ver == IP_VERSION(10, 1, 2) || gc_ver == IP_VERSION(11, 0, 0) || - gc_ver == IP_VERSION(11, 0, 2))) + gc_ver == IP_VERSION(11, 0, 2) || + gc_ver == IP_VERSION(11, 0, 3))) *states = ATTR_STATE_UNSUPPORTED; } else if (DEVICE_ATTR_IS(pp_dpm_dclk)) { if (!(gc_ver == IP_VERSION(10, 3, 1) || gc_ver == IP_VERSION(10, 3, 0) || gc_ver == IP_VERSION(10, 1, 2) || gc_ver == IP_VERSION(11, 0, 0) || - gc_ver == IP_VERSION(11, 0, 2))) + gc_ver == IP_VERSION(11, 0, 2) || + gc_ver == IP_VERSION(11, 0, 3))) *states = ATTR_STATE_UNSUPPORTED; } else if (DEVICE_ATTR_IS(pp_power_profile_mode)) { if (amdgpu_dpm_get_power_profile_mode(adev, NULL) == -EOPNOTSUPP) diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c index ca3beb5d8f27..6ab155023592 100644 --- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c +++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c @@ -1500,6 +1500,20 @@ static int smu_disable_dpms(struct smu_context *smu) } /* + * For SMU 13.0.4/11, PMFW will handle the features disablement properly + * for gpu reset case. Driver involvement is unnecessary. + */ + if (amdgpu_in_reset(adev)) { + switch (adev->ip_versions[MP1_HWIP][0]) { + case IP_VERSION(13, 0, 4): + case IP_VERSION(13, 0, 11): + return 0; + default: + break; + } + } + + /* * For gpu reset, runpm and hibernation through BACO, * BACO feature has to be kept enabled. 
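
On the dmub_srv hunks above: after a hardware reset the firmware's ring pointers are back at zero, so any cached software copies must be zeroed too or producer and consumer disagree about where the ring stands. A minimal model of that state reset; the field names mirror the hunk, everything else is a stand-in:

    struct ring_sketch {
        unsigned int wrpt; /* producer offset */
        unsigned int rptr; /* consumer offset */
    };

    static void hw_reset_sketch(struct ring_sketch *rb,
                                unsigned int *last_wptr_cache)
    {
        /* Hardware zeroed its copies; zero the software state to match. */
        rb->wrpt = 0;
        rb->rptr = 0;
        *last_wptr_cache = 0;
    }
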
*/ diff --git a/drivers/gpu/drm/i915/display/intel_cdclk.c b/drivers/gpu/drm/i915/display/intel_cdclk.c index b74e36d76013..407a477939e5 100644 --- a/drivers/gpu/drm/i915/display/intel_cdclk.c +++ b/drivers/gpu/drm/i915/display/intel_cdclk.c @@ -1319,7 +1319,7 @@ static const struct intel_cdclk_vals adlp_cdclk_table[] = { { .refclk = 24000, .cdclk = 192000, .divider = 2, .ratio = 16 }, { .refclk = 24000, .cdclk = 312000, .divider = 2, .ratio = 26 }, { .refclk = 24000, .cdclk = 552000, .divider = 2, .ratio = 46 }, - { .refclk = 24400, .cdclk = 648000, .divider = 2, .ratio = 54 }, + { .refclk = 24000, .cdclk = 648000, .divider = 2, .ratio = 54 }, { .refclk = 38400, .cdclk = 179200, .divider = 3, .ratio = 14 }, { .refclk = 38400, .cdclk = 192000, .divider = 2, .ratio = 10 }, diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c index 6250de9b9196..e4b78ab4773b 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c @@ -1861,11 +1861,19 @@ static int get_ppgtt(struct drm_i915_file_private *file_priv, vm = ctx->vm; GEM_BUG_ON(!vm); + /* + * Get a reference for the allocated handle. Once the handle is + * visible in the vm_xa table, userspace could try to close it + * from under our feet, so we need to hold the extra reference + * first. + */ + i915_vm_get(vm); + err = xa_alloc(&file_priv->vm_xa, &id, vm, xa_limit_32b, GFP_KERNEL); - if (err) + if (err) { + i915_vm_put(vm); return err; - - i915_vm_get(vm); + } GEM_BUG_ON(id == 0); /* reserved for invalid/unassigned ppgtt */ args->value = id; diff --git a/drivers/gpu/drm/i915/gem/i915_gem_tiling.c b/drivers/gpu/drm/i915/gem/i915_gem_tiling.c index fd42b89b7162..bc21b1c2350a 100644 --- a/drivers/gpu/drm/i915/gem/i915_gem_tiling.c +++ b/drivers/gpu/drm/i915/gem/i915_gem_tiling.c @@ -305,10 +305,6 @@ i915_gem_object_set_tiling(struct drm_i915_gem_object *obj, spin_unlock(&obj->vma.lock); obj->tiling_and_stride = tiling | stride; - i915_gem_object_unlock(obj); - - /* Force the fence to be reacquired for GTT access */ - i915_gem_object_release_mmap_gtt(obj); /* Try to preallocate memory required to save swizzling on put-pages */ if (i915_gem_object_needs_bit17_swizzle(obj)) { @@ -321,6 +317,11 @@ i915_gem_object_set_tiling(struct drm_i915_gem_object *obj, obj->bit_17 = NULL; } + i915_gem_object_unlock(obj); + + /* Force the fence to be reacquired for GTT access */ + i915_gem_object_release_mmap_gtt(obj); + return 0; } diff --git a/drivers/gpu/drm/i915/gt/intel_context.c b/drivers/gpu/drm/i915/gt/intel_context.c index e94365b08f1e..2aa63ec521b8 100644 --- a/drivers/gpu/drm/i915/gt/intel_context.c +++ b/drivers/gpu/drm/i915/gt/intel_context.c @@ -528,7 +528,7 @@ retry: return rq; } -struct i915_request *intel_context_find_active_request(struct intel_context *ce) +struct i915_request *intel_context_get_active_request(struct intel_context *ce) { struct intel_context *parent = intel_context_to_parent(ce); struct i915_request *rq, *active = NULL; @@ -552,6 +552,8 @@ struct i915_request *intel_context_find_active_request(struct intel_context *ce) active = rq; } + if (active) + active = i915_request_get_rcu(active); spin_unlock_irqrestore(&parent->guc_state.lock, flags); return active; diff --git a/drivers/gpu/drm/i915/gt/intel_context.h b/drivers/gpu/drm/i915/gt/intel_context.h index fb62b7b8cbcd..0a8d553da3f4 100644 --- a/drivers/gpu/drm/i915/gt/intel_context.h +++ b/drivers/gpu/drm/i915/gt/intel_context.h @@ -268,8 +268,7 @@ int 
intel_context_prepare_remote_request(struct intel_context *ce, struct i915_request *intel_context_create_request(struct intel_context *ce); -struct i915_request * -intel_context_find_active_request(struct intel_context *ce); +struct i915_request *intel_context_get_active_request(struct intel_context *ce); static inline bool intel_context_is_barrier(const struct intel_context *ce) { diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h b/drivers/gpu/drm/i915/gt/intel_engine.h index cbc8b857d5f7..7a4504ea35c3 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine.h +++ b/drivers/gpu/drm/i915/gt/intel_engine.h @@ -248,8 +248,8 @@ void intel_engine_dump_active_requests(struct list_head *requests, ktime_t intel_engine_get_busy_time(struct intel_engine_cs *engine, ktime_t *now); -struct i915_request * -intel_engine_execlist_find_hung_request(struct intel_engine_cs *engine); +void intel_engine_get_hung_entity(struct intel_engine_cs *engine, + struct intel_context **ce, struct i915_request **rq); u32 intel_engine_context_size(struct intel_gt *gt, u8 class); struct intel_context * diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c index c33e0d72d670..d37931e16fd9 100644 --- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c +++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c @@ -2094,17 +2094,6 @@ static void print_request_ring(struct drm_printer *m, struct i915_request *rq) } } -static unsigned long list_count(struct list_head *list) -{ - struct list_head *pos; - unsigned long count = 0; - - list_for_each(pos, list) - count++; - - return count; -} - static unsigned long read_ul(void *p, size_t x) { return *(unsigned long *)(p + x); @@ -2196,11 +2185,11 @@ void intel_engine_dump_active_requests(struct list_head *requests, } } -static void engine_dump_active_requests(struct intel_engine_cs *engine, struct drm_printer *m) +static void engine_dump_active_requests(struct intel_engine_cs *engine, + struct drm_printer *m) { + struct intel_context *hung_ce = NULL; struct i915_request *hung_rq = NULL; - struct intel_context *ce; - bool guc; /* * No need for an engine->irq_seqno_barrier() before the seqno reads. @@ -2209,27 +2198,22 @@ static void engine_dump_active_requests(struct intel_engine_cs *engine, struct d * But the intention here is just to report an instantaneous snapshot * so that's fine. 
*/ - lockdep_assert_held(&engine->sched_engine->lock); + intel_engine_get_hung_entity(engine, &hung_ce, &hung_rq); drm_printf(m, "\tRequests:\n"); - guc = intel_uc_uses_guc_submission(&engine->gt->uc); - if (guc) { - ce = intel_engine_get_hung_context(engine); - if (ce) - hung_rq = intel_context_find_active_request(ce); - } else { - hung_rq = intel_engine_execlist_find_hung_request(engine); - } - if (hung_rq) engine_dump_request(hung_rq, m, "\t\thung"); + else if (hung_ce) + drm_printf(m, "\t\tGot hung ce but no hung rq!\n"); - if (guc) + if (intel_uc_uses_guc_submission(&engine->gt->uc)) intel_guc_dump_active_requests(engine, hung_rq, m); else - intel_engine_dump_active_requests(&engine->sched_engine->requests, - hung_rq, m); + intel_execlists_dump_active_requests(engine, hung_rq, m); + + if (hung_rq) + i915_request_put(hung_rq); } void intel_engine_dump(struct intel_engine_cs *engine, @@ -2239,7 +2223,6 @@ void intel_engine_dump(struct intel_engine_cs *engine, struct i915_gpu_error * const error = &engine->i915->gpu_error; struct i915_request *rq; intel_wakeref_t wakeref; - unsigned long flags; ktime_t dummy; if (header) { @@ -2276,13 +2259,8 @@ void intel_engine_dump(struct intel_engine_cs *engine, i915_reset_count(error)); print_properties(engine, m); - spin_lock_irqsave(&engine->sched_engine->lock, flags); engine_dump_active_requests(engine, m); - drm_printf(m, "\tOn hold?: %lu\n", - list_count(&engine->sched_engine->hold)); - spin_unlock_irqrestore(&engine->sched_engine->lock, flags); - drm_printf(m, "\tMMIO base: 0x%08x\n", engine->mmio_base); wakeref = intel_runtime_pm_get_if_in_use(engine->uncore->rpm); if (wakeref) { @@ -2328,8 +2306,7 @@ intel_engine_create_virtual(struct intel_engine_cs **siblings, return siblings[0]->cops->create_virtual(siblings, count, flags); } -struct i915_request * -intel_engine_execlist_find_hung_request(struct intel_engine_cs *engine) +static struct i915_request *engine_execlist_find_hung_request(struct intel_engine_cs *engine) { struct i915_request *request, *active = NULL; @@ -2381,6 +2358,33 @@ intel_engine_execlist_find_hung_request(struct intel_engine_cs *engine) return active; } +void intel_engine_get_hung_entity(struct intel_engine_cs *engine, + struct intel_context **ce, struct i915_request **rq) +{ + unsigned long flags; + + *ce = intel_engine_get_hung_context(engine); + if (*ce) { + intel_engine_clear_hung_context(engine); + + *rq = intel_context_get_active_request(*ce); + return; + } + + /* + * Getting here with GuC enabled means it is a forced error capture + * with no actual hang. So, no need to attempt the execlist search. 
+ */ + if (intel_uc_uses_guc_submission(&engine->gt->uc)) + return; + + spin_lock_irqsave(&engine->sched_engine->lock, flags); + *rq = engine_execlist_find_hung_request(engine); + if (*rq) + *rq = i915_request_get_rcu(*rq); + spin_unlock_irqrestore(&engine->sched_engine->lock, flags); +} + void xehp_enable_ccs_engines(struct intel_engine_cs *engine) { /* diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c index 2daffa7c7dfd..21cb5b69d82e 100644 --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c @@ -4148,6 +4148,33 @@ void intel_execlists_show_requests(struct intel_engine_cs *engine, spin_unlock_irqrestore(&sched_engine->lock, flags); } +static unsigned long list_count(struct list_head *list) +{ + struct list_head *pos; + unsigned long count = 0; + + list_for_each(pos, list) + count++; + + return count; +} + +void intel_execlists_dump_active_requests(struct intel_engine_cs *engine, + struct i915_request *hung_rq, + struct drm_printer *m) +{ + unsigned long flags; + + spin_lock_irqsave(&engine->sched_engine->lock, flags); + + intel_engine_dump_active_requests(&engine->sched_engine->requests, hung_rq, m); + + drm_printf(m, "\tOn hold?: %lu\n", + list_count(&engine->sched_engine->hold)); + + spin_unlock_irqrestore(&engine->sched_engine->lock, flags); +} + #if IS_ENABLED(CONFIG_DRM_I915_SELFTEST) #include "selftest_execlists.c" #endif diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.h b/drivers/gpu/drm/i915/gt/intel_execlists_submission.h index a1aa92c983a5..d2c7d45ea062 100644 --- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.h +++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.h @@ -32,6 +32,10 @@ void intel_execlists_show_requests(struct intel_engine_cs *engine, int indent), unsigned int max); +void intel_execlists_dump_active_requests(struct intel_engine_cs *engine, + struct i915_request *hung_rq, + struct drm_printer *m); + bool intel_engine_in_execlists_submission_mode(const struct intel_engine_cs *engine); diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c index 0a42f1807f52..c10977cb06b9 100644 --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c @@ -1702,7 +1702,7 @@ static void __guc_reset_context(struct intel_context *ce, intel_engine_mask_t st goto next_context; guilty = false; - rq = intel_context_find_active_request(ce); + rq = intel_context_get_active_request(ce); if (!rq) { head = ce->ring->tail; goto out_replay; @@ -1715,6 +1715,7 @@ static void __guc_reset_context(struct intel_context *ce, intel_engine_mask_t st head = intel_ring_wrap(ce->ring, rq->head); __i915_request_reset(rq, guilty); + i915_request_put(rq); out_replay: guc_reset_state(ce, head, guilty); next_context: @@ -4817,6 +4818,8 @@ void intel_guc_find_hung_context(struct intel_engine_cs *engine) xa_lock_irqsave(&guc->context_lookup, flags); xa_for_each(&guc->context_lookup, index, ce) { + bool found; + if (!kref_get_unless_zero(&ce->ref)) continue; @@ -4833,10 +4836,18 @@ void intel_guc_find_hung_context(struct intel_engine_cs *engine) goto next; } + found = false; + spin_lock(&ce->guc_state.lock); list_for_each_entry(rq, &ce->guc_state.requests, sched.link) { if (i915_test_request_state(rq) != I915_REQUEST_ACTIVE) continue; + found = true; + break; + } + spin_unlock(&ce->guc_state.lock); + + if (found) { 
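A pattern repeated across these request-lookup changes: a pointer found while holding a lock is only safe to return if it carries its own reference, because the list can mutate the moment the lock drops. A hedged sketch of that shape, reusing i915's request helpers from the hunks above (the list head and lock are stand-ins for whatever structure owns the requests):

static struct i915_request *
demo_first_active_request_get(struct list_head *requests, spinlock_t *lock)
{
	struct i915_request *rq, *active = NULL;
	unsigned long flags;

	spin_lock_irqsave(lock, flags);
	list_for_each_entry(rq, requests, sched.link) {
		if (i915_test_request_state(rq) != I915_REQUEST_ACTIVE)
			continue;

		/* take the reference before the lock is released */
		active = i915_request_get_rcu(rq);
		break;
	}
	spin_unlock_irqrestore(lock, flags);

	/* caller owns the reference and must i915_request_put() it */
	return active;
}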
intel_engine_set_hung_context(engine, ce); /* Can only cope with one hang at a time... */ @@ -4844,6 +4855,7 @@ void intel_guc_find_hung_context(struct intel_engine_cs *engine) xa_lock(&guc->context_lookup); goto done; } + next: intel_context_put(ce); xa_lock(&guc->context_lookup); diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c index 9d5d5a397b64..b20bd6365615 100644 --- a/drivers/gpu/drm/i915/i915_gpu_error.c +++ b/drivers/gpu/drm/i915/i915_gpu_error.c @@ -1596,43 +1596,20 @@ capture_engine(struct intel_engine_cs *engine, { struct intel_engine_capture_vma *capture = NULL; struct intel_engine_coredump *ee; - struct intel_context *ce; + struct intel_context *ce = NULL; struct i915_request *rq = NULL; - unsigned long flags; ee = intel_engine_coredump_alloc(engine, ALLOW_FAIL, dump_flags); if (!ee) return NULL; - ce = intel_engine_get_hung_context(engine); - if (ce) { - intel_engine_clear_hung_context(engine); - rq = intel_context_find_active_request(ce); - if (!rq || !i915_request_started(rq)) - goto no_request_capture; - } else { - /* - * Getting here with GuC enabled means it is a forced error capture - * with no actual hang. So, no need to attempt the execlist search. - */ - if (!intel_uc_uses_guc_submission(&engine->gt->uc)) { - spin_lock_irqsave(&engine->sched_engine->lock, flags); - rq = intel_engine_execlist_find_hung_request(engine); - spin_unlock_irqrestore(&engine->sched_engine->lock, - flags); - } - } - if (rq) - rq = i915_request_get_rcu(rq); - - if (!rq) + intel_engine_get_hung_entity(engine, &ce, &rq); + if (!rq || !i915_request_started(rq)) goto no_request_capture; capture = intel_engine_coredump_add_request(ee, rq, ATOMIC_MAYFAIL); - if (!capture) { - i915_request_put(rq); + if (!capture) goto no_request_capture; - } if (dump_flags & CORE_DUMP_FLAG_IS_GUC_CAPTURE) intel_guc_capture_get_matching_node(engine->gt, ee, ce); @@ -1642,6 +1619,8 @@ capture_engine(struct intel_engine_cs *engine, return ee; no_request_capture: + if (rq) + i915_request_put(rq); kfree(ee); return NULL; } diff --git a/drivers/gpu/drm/nouveau/include/nvkm/subdev/fb.h b/drivers/gpu/drm/nouveau/include/nvkm/subdev/fb.h index 40768373cdd9..c5a4f49ee206 100644 --- a/drivers/gpu/drm/nouveau/include/nvkm/subdev/fb.h +++ b/drivers/gpu/drm/nouveau/include/nvkm/subdev/fb.h @@ -97,6 +97,7 @@ int gp100_fb_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct n int gp102_fb_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_fb **); int gp10b_fb_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_fb **); int gv100_fb_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_fb **); +int tu102_fb_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_fb **); int ga100_fb_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_fb **); int ga102_fb_new(struct nvkm_device *, enum nvkm_subdev_type, int inst, struct nvkm_fb **); diff --git a/drivers/gpu/drm/nouveau/nvkm/core/firmware.c b/drivers/gpu/drm/nouveau/nvkm/core/firmware.c index fcf2a002f6cb..91fb494d4009 100644 --- a/drivers/gpu/drm/nouveau/nvkm/core/firmware.c +++ b/drivers/gpu/drm/nouveau/nvkm/core/firmware.c @@ -151,6 +151,9 @@ nvkm_firmware_mem_page(struct nvkm_memory *memory) static enum nvkm_memory_target nvkm_firmware_mem_target(struct nvkm_memory *memory) { + if (nvkm_firmware_mem(memory)->device->func->tegra) + return NVKM_MEM_TARGET_NCOH; + return NVKM_MEM_TARGET_HOST; } diff --git 
a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c index 364fea320cb3..1c81e5b34d29 100644 --- a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c +++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c @@ -2405,7 +2405,7 @@ nv162_chipset = { .bus = { 0x00000001, gf100_bus_new }, .devinit = { 0x00000001, tu102_devinit_new }, .fault = { 0x00000001, tu102_fault_new }, - .fb = { 0x00000001, gv100_fb_new }, + .fb = { 0x00000001, tu102_fb_new }, .fuse = { 0x00000001, gm107_fuse_new }, .gpio = { 0x00000001, gk104_gpio_new }, .gsp = { 0x00000001, gv100_gsp_new }, @@ -2440,7 +2440,7 @@ nv164_chipset = { .bus = { 0x00000001, gf100_bus_new }, .devinit = { 0x00000001, tu102_devinit_new }, .fault = { 0x00000001, tu102_fault_new }, - .fb = { 0x00000001, gv100_fb_new }, + .fb = { 0x00000001, tu102_fb_new }, .fuse = { 0x00000001, gm107_fuse_new }, .gpio = { 0x00000001, gk104_gpio_new }, .gsp = { 0x00000001, gv100_gsp_new }, @@ -2475,7 +2475,7 @@ nv166_chipset = { .bus = { 0x00000001, gf100_bus_new }, .devinit = { 0x00000001, tu102_devinit_new }, .fault = { 0x00000001, tu102_fault_new }, - .fb = { 0x00000001, gv100_fb_new }, + .fb = { 0x00000001, tu102_fb_new }, .fuse = { 0x00000001, gm107_fuse_new }, .gpio = { 0x00000001, gk104_gpio_new }, .gsp = { 0x00000001, gv100_gsp_new }, @@ -2510,7 +2510,7 @@ nv167_chipset = { .bus = { 0x00000001, gf100_bus_new }, .devinit = { 0x00000001, tu102_devinit_new }, .fault = { 0x00000001, tu102_fault_new }, - .fb = { 0x00000001, gv100_fb_new }, + .fb = { 0x00000001, tu102_fb_new }, .fuse = { 0x00000001, gm107_fuse_new }, .gpio = { 0x00000001, gk104_gpio_new }, .gsp = { 0x00000001, gv100_gsp_new }, @@ -2545,7 +2545,7 @@ nv168_chipset = { .bus = { 0x00000001, gf100_bus_new }, .devinit = { 0x00000001, tu102_devinit_new }, .fault = { 0x00000001, tu102_fault_new }, - .fb = { 0x00000001, gv100_fb_new }, + .fb = { 0x00000001, tu102_fb_new }, .fuse = { 0x00000001, gm107_fuse_new }, .gpio = { 0x00000001, gk104_gpio_new }, .gsp = { 0x00000001, gv100_gsp_new }, diff --git a/drivers/gpu/drm/nouveau/nvkm/falcon/gm200.c b/drivers/gpu/drm/nouveau/nvkm/falcon/gm200.c index 393ade9f7e6c..b7da3ab44c27 100644 --- a/drivers/gpu/drm/nouveau/nvkm/falcon/gm200.c +++ b/drivers/gpu/drm/nouveau/nvkm/falcon/gm200.c @@ -48,6 +48,16 @@ gm200_flcn_pio_dmem_rd(struct nvkm_falcon *falcon, u8 port, const u8 *img, int l img += 4; len -= 4; } + + /* Sigh. Tegra PMU FW's init message... 
*/ + if (len) { + u32 data = nvkm_falcon_rd32(falcon, 0x1c4 + (port * 8)); + + while (len--) { + *(u8 *)img++ = data & 0xff; + data >>= 8; + } + } } static void @@ -64,6 +74,8 @@ gm200_flcn_pio_dmem_wr(struct nvkm_falcon *falcon, u8 port, const u8 *img, int l img += 4; len -= 4; } + + WARN_ON(len); } static void @@ -74,7 +86,7 @@ gm200_flcn_pio_dmem_wr_init(struct nvkm_falcon *falcon, u8 port, bool sec, u32 d const struct nvkm_falcon_func_pio gm200_flcn_dmem_pio = { - .min = 4, + .min = 1, .max = 0x100, .wr_init = gm200_flcn_pio_dmem_wr_init, .wr = gm200_flcn_pio_dmem_wr, diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/tu102.c b/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/tu102.c index 634f64f88fc8..81a1ad2c88a7 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/tu102.c +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/tu102.c @@ -65,10 +65,33 @@ tu102_devinit_pll_set(struct nvkm_devinit *init, u32 type, u32 freq) return ret; } +static int +tu102_devinit_wait(struct nvkm_device *device) +{ + unsigned timeout = 50 + 2000; + + do { + if (nvkm_rd32(device, 0x118128) & 0x00000001) { + if ((nvkm_rd32(device, 0x118234) & 0x000000ff) == 0xff) + return 0; + } + + usleep_range(1000, 2000); + } while (timeout--); + + return -ETIMEDOUT; +} + int tu102_devinit_post(struct nvkm_devinit *base, bool post) { struct nv50_devinit *init = nv50_devinit(base); + int ret; + + ret = tu102_devinit_wait(init->base.subdev.device); + if (ret) + return ret; + gm200_devinit_preos(init, post); return 0; } diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/Kbuild b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/Kbuild index 5d0bab8ecb43..6ba5120a2ebe 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/Kbuild +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/Kbuild @@ -32,6 +32,7 @@ nvkm-y += nvkm/subdev/fb/gp100.o nvkm-y += nvkm/subdev/fb/gp102.o nvkm-y += nvkm/subdev/fb/gp10b.o nvkm-y += nvkm/subdev/fb/gv100.o +nvkm-y += nvkm/subdev/fb/tu102.o nvkm-y += nvkm/subdev/fb/ga100.o nvkm-y += nvkm/subdev/fb/ga102.o diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ga102.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ga102.c index 8b7c8ea5e8a5..5a21b0ae4595 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ga102.c +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ga102.c @@ -40,12 +40,6 @@ ga102_fb_vpr_scrub(struct nvkm_fb *fb) return ret; } -static bool -ga102_fb_vpr_scrub_required(struct nvkm_fb *fb) -{ - return (nvkm_rd32(fb->subdev.device, 0x1fa80c) & 0x00000010) != 0; -} - static const struct nvkm_fb_func ga102_fb = { .dtor = gf100_fb_dtor, @@ -56,7 +50,7 @@ ga102_fb = { .sysmem.flush_page_init = gf100_fb_sysmem_flush_page_init, .ram_new = ga102_ram_new, .default_bigpage = 16, - .vpr.scrub_required = ga102_fb_vpr_scrub_required, + .vpr.scrub_required = tu102_fb_vpr_scrub_required, .vpr.scrub = ga102_fb_vpr_scrub, }; diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gv100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gv100.c index 1f0126437c1a..0e3c0a8f5d71 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gv100.c +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/gv100.c @@ -49,8 +49,3 @@ gv100_fb_new(struct nvkm_device *device, enum nvkm_subdev_type type, int inst, s } MODULE_FIRMWARE("nvidia/gv100/nvdec/scrubber.bin"); -MODULE_FIRMWARE("nvidia/tu102/nvdec/scrubber.bin"); -MODULE_FIRMWARE("nvidia/tu104/nvdec/scrubber.bin"); -MODULE_FIRMWARE("nvidia/tu106/nvdec/scrubber.bin"); -MODULE_FIRMWARE("nvidia/tu116/nvdec/scrubber.bin"); -MODULE_FIRMWARE("nvidia/tu117/nvdec/scrubber.bin"); diff --git 
a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/priv.h b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/priv.h index ac03eac0f261..f517751f94ac 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/priv.h +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/priv.h @@ -89,4 +89,6 @@ bool gp102_fb_vpr_scrub_required(struct nvkm_fb *); int gp102_fb_vpr_scrub(struct nvkm_fb *); int gv100_fb_init_page(struct nvkm_fb *); + +bool tu102_fb_vpr_scrub_required(struct nvkm_fb *); #endif diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/tu102.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/tu102.c new file mode 100644 index 000000000000..be82af0364ee --- /dev/null +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/tu102.c @@ -0,0 +1,55 @@ +/* + * Copyright 2018 Red Hat Inc. + * + * Permission is hereby granted, free of charge, to any person obtaining a + * copy of this software and associated documentation files (the "Software"), + * to deal in the Software without restriction, including without limitation + * the rights to use, copy, modify, merge, publish, distribute, sublicense, + * and/or sell copies of the Software, and to permit persons to whom the + * Software is furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR + * OTHER DEALINGS IN THE SOFTWARE. 
+ */ +#include "gf100.h" +#include "ram.h" + +bool +tu102_fb_vpr_scrub_required(struct nvkm_fb *fb) +{ + return (nvkm_rd32(fb->subdev.device, 0x1fa80c) & 0x00000010) != 0; +} + +static const struct nvkm_fb_func +tu102_fb = { + .dtor = gf100_fb_dtor, + .oneinit = gf100_fb_oneinit, + .init = gm200_fb_init, + .init_page = gv100_fb_init_page, + .init_unkn = gp100_fb_init_unkn, + .sysmem.flush_page_init = gf100_fb_sysmem_flush_page_init, + .vpr.scrub_required = tu102_fb_vpr_scrub_required, + .vpr.scrub = gp102_fb_vpr_scrub, + .ram_new = gp100_ram_new, + .default_bigpage = 16, +}; + +int +tu102_fb_new(struct nvkm_device *device, enum nvkm_subdev_type type, int inst, struct nvkm_fb **pfb) +{ + return gp102_fb_new_(&tu102_fb, device, type, inst, pfb); +} + +MODULE_FIRMWARE("nvidia/tu102/nvdec/scrubber.bin"); +MODULE_FIRMWARE("nvidia/tu104/nvdec/scrubber.bin"); +MODULE_FIRMWARE("nvidia/tu106/nvdec/scrubber.bin"); +MODULE_FIRMWARE("nvidia/tu116/nvdec/scrubber.bin"); +MODULE_FIRMWARE("nvidia/tu117/nvdec/scrubber.bin"); diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.c index a72403777329..2ed04da3621d 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.c +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gm20b.c @@ -225,7 +225,7 @@ gm20b_pmu_init(struct nvkm_pmu *pmu) pmu->initmsg_received = false; - nvkm_falcon_load_dmem(falcon, &args, addr_args, sizeof(args), 0); + nvkm_falcon_pio_wr(falcon, (u8 *)&args, 0, 0, DMEM, addr_args, sizeof(args), 0, false); nvkm_falcon_start(falcon); return 0; } diff --git a/drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c b/drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c index 857a2f0420d7..c924f1124ebc 100644 --- a/drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c +++ b/drivers/gpu/drm/panel/panel-boe-tv101wum-nl6.c @@ -1193,14 +1193,11 @@ static int boe_panel_enter_sleep_mode(struct boe_panel *boe) return 0; } -static int boe_panel_unprepare(struct drm_panel *panel) +static int boe_panel_disable(struct drm_panel *panel) { struct boe_panel *boe = to_boe_panel(panel); int ret; - if (!boe->prepared) - return 0; - ret = boe_panel_enter_sleep_mode(boe); if (ret < 0) { dev_err(panel->dev, "failed to set panel off: %d\n", ret); @@ -1209,6 +1206,16 @@ static int boe_panel_unprepare(struct drm_panel *panel) msleep(150); + return 0; +} + +static int boe_panel_unprepare(struct drm_panel *panel) +{ + struct boe_panel *boe = to_boe_panel(panel); + + if (!boe->prepared) + return 0; + if (boe->desc->discharge_on_disable) { regulator_disable(boe->avee); regulator_disable(boe->avdd); @@ -1528,6 +1535,7 @@ static enum drm_panel_orientation boe_panel_get_orientation(struct drm_panel *pa } static const struct drm_panel_funcs boe_panel_funcs = { + .disable = boe_panel_disable, .unprepare = boe_panel_unprepare, .prepare = boe_panel_prepare, .enable = boe_panel_enable, diff --git a/drivers/gpu/drm/solomon/ssd130x.c b/drivers/gpu/drm/solomon/ssd130x.c index 53464afc2b9a..91f69e62430b 100644 --- a/drivers/gpu/drm/solomon/ssd130x.c +++ b/drivers/gpu/drm/solomon/ssd130x.c @@ -656,18 +656,8 @@ static const struct drm_crtc_helper_funcs ssd130x_crtc_helper_funcs = { .atomic_check = drm_crtc_helper_atomic_check, }; -static void ssd130x_crtc_reset(struct drm_crtc *crtc) -{ - struct drm_device *drm = crtc->dev; - struct ssd130x_device *ssd130x = drm_to_ssd130x(drm); - - ssd130x_init(ssd130x); - - drm_atomic_helper_crtc_reset(crtc); -} - static const struct drm_crtc_funcs ssd130x_crtc_funcs = { - .reset = ssd130x_crtc_reset, + .reset = 
drm_atomic_helper_crtc_reset, .destroy = drm_crtc_cleanup, .set_config = drm_atomic_helper_set_config, .page_flip = drm_atomic_helper_page_flip, @@ -686,6 +676,12 @@ static void ssd130x_encoder_helper_atomic_enable(struct drm_encoder *encoder, if (ret) return; + ret = ssd130x_init(ssd130x); + if (ret) { + ssd130x_power_off(ssd130x); + return; + } + ssd130x_write_cmd(ssd130x, 1, SSD130X_DISPLAY_ON); backlight_enable(ssd130x->bl_dev); diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c index 12a00d644b61..55744216392b 100644 --- a/drivers/gpu/drm/vc4/vc4_hdmi.c +++ b/drivers/gpu/drm/vc4/vc4_hdmi.c @@ -3018,7 +3018,8 @@ static int vc4_hdmi_cec_init(struct vc4_hdmi *vc4_hdmi) } vc4_hdmi->cec_adap = cec_allocate_adapter(&vc4_hdmi_cec_adap_ops, - vc4_hdmi, "vc4", + vc4_hdmi, + vc4_hdmi->variant->card_name, CEC_CAP_DEFAULTS | CEC_CAP_CONNECTOR_INFO, 1); ret = PTR_ERR_OR_ZERO(vc4_hdmi->cec_adap); diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_client.c b/drivers/hid/amd-sfh-hid/amd_sfh_client.c index 1fb0f7105fb2..c751d12f5df8 100644 --- a/drivers/hid/amd-sfh-hid/amd_sfh_client.c +++ b/drivers/hid/amd-sfh-hid/amd_sfh_client.c @@ -227,6 +227,7 @@ int amd_sfh_hid_client_init(struct amd_mp2_dev *privdata) cl_data->num_hid_devices = amd_mp2_get_sensor_num(privdata, &cl_data->sensor_idx[0]); if (cl_data->num_hid_devices == 0) return -ENODEV; + cl_data->is_any_sensor_enabled = false; INIT_DELAYED_WORK(&cl_data->work, amd_sfh_work); INIT_DELAYED_WORK(&cl_data->work_buffer, amd_sfh_work_buffer); @@ -287,6 +288,7 @@ int amd_sfh_hid_client_init(struct amd_mp2_dev *privdata) status = amd_sfh_wait_for_response (privdata, cl_data->sensor_idx[i], SENSOR_ENABLED); if (status == SENSOR_ENABLED) { + cl_data->is_any_sensor_enabled = true; cl_data->sensor_sts[i] = SENSOR_ENABLED; rc = amdtp_hid_probe(cl_data->cur_hid_dev, cl_data); if (rc) { @@ -301,19 +303,26 @@ int amd_sfh_hid_client_init(struct amd_mp2_dev *privdata) cl_data->sensor_sts[i]); goto cleanup; } + } else { + cl_data->sensor_sts[i] = SENSOR_DISABLED; + dev_dbg(dev, "sid 0x%x (%s) status 0x%x\n", + cl_data->sensor_idx[i], + get_sensor_name(cl_data->sensor_idx[i]), + cl_data->sensor_sts[i]); } dev_dbg(dev, "sid 0x%x (%s) status 0x%x\n", cl_data->sensor_idx[i], get_sensor_name(cl_data->sensor_idx[i]), cl_data->sensor_sts[i]); } - if (mp2_ops->discovery_status && mp2_ops->discovery_status(privdata) == 0) { + if (!cl_data->is_any_sensor_enabled || + (mp2_ops->discovery_status && mp2_ops->discovery_status(privdata) == 0)) { amd_sfh_hid_client_deinit(privdata); for (i = 0; i < cl_data->num_hid_devices; i++) { devm_kfree(dev, cl_data->feature_report[i]); devm_kfree(dev, in_data->input_report[i]); devm_kfree(dev, cl_data->report_descr[i]); } - dev_warn(dev, "Failed to discover, sensors not enabled\n"); + dev_warn(dev, "Failed to discover, sensors not enabled is %d\n", cl_data->is_any_sensor_enabled); return -EOPNOTSUPP; } schedule_delayed_work(&cl_data->work_buffer, msecs_to_jiffies(AMD_SFH_IDLE_LOOP)); diff --git a/drivers/hid/amd-sfh-hid/amd_sfh_hid.h b/drivers/hid/amd-sfh-hid/amd_sfh_hid.h index 3754fb423e3a..528036892c9d 100644 --- a/drivers/hid/amd-sfh-hid/amd_sfh_hid.h +++ b/drivers/hid/amd-sfh-hid/amd_sfh_hid.h @@ -32,6 +32,7 @@ struct amd_input_data { struct amdtp_cl_data { u8 init_done; u32 cur_hid_dev; + bool is_any_sensor_enabled; u32 hid_dev_count; u32 num_hid_devices; struct device_info *hid_devices; diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c index 3e1803592bd4..5c72aef3d3dd 100644 --- 
a/drivers/hid/hid-core.c +++ b/drivers/hid/hid-core.c @@ -1202,6 +1202,7 @@ int hid_open_report(struct hid_device *device) __u8 *end; __u8 *next; int ret; + int i; static int (*dispatch_type[])(struct hid_parser *parser, struct hid_item *item) = { hid_parser_main, @@ -1252,6 +1253,8 @@ int hid_open_report(struct hid_device *device) goto err; } device->collection_size = HID_DEFAULT_NUM_COLLECTIONS; + for (i = 0; i < HID_DEFAULT_NUM_COLLECTIONS; i++) + device->collection[i].parent_idx = -1; ret = -EINVAL; while ((next = fetch_item(start, end, &item)) != NULL) { diff --git a/drivers/hid/hid-elecom.c b/drivers/hid/hid-elecom.c index e59e9911fc37..4fa45ee77503 100644 --- a/drivers/hid/hid-elecom.c +++ b/drivers/hid/hid-elecom.c @@ -12,6 +12,7 @@ * Copyright (c) 2017 Alex Manoussakis <[email protected]> * Copyright (c) 2017 Tomasz Kramkowski <[email protected]> * Copyright (c) 2020 YOSHIOKA Takuma <[email protected]> + * Copyright (c) 2022 Takahiro Fujii <[email protected]> */ /* @@ -89,7 +90,7 @@ static __u8 *elecom_report_fixup(struct hid_device *hdev, __u8 *rdesc, case USB_DEVICE_ID_ELECOM_M_DT1URBK: case USB_DEVICE_ID_ELECOM_M_DT1DRBK: case USB_DEVICE_ID_ELECOM_M_HT1URBK: - case USB_DEVICE_ID_ELECOM_M_HT1DRBK: + case USB_DEVICE_ID_ELECOM_M_HT1DRBK_010D: /* * Report descriptor format: * 12: button bit count @@ -99,6 +100,16 @@ static __u8 *elecom_report_fixup(struct hid_device *hdev, __u8 *rdesc, */ mouse_button_fixup(hdev, rdesc, *rsize, 12, 30, 14, 20, 8); break; + case USB_DEVICE_ID_ELECOM_M_HT1DRBK_011C: + /* + * Report descriptor format: + * 22: button bit count + * 30: padding bit count + * 24: button report size + * 16: button usage maximum + */ + mouse_button_fixup(hdev, rdesc, *rsize, 22, 30, 24, 16, 8); + break; } return rdesc; } @@ -112,7 +123,8 @@ static const struct hid_device_id elecom_devices[] = { { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1URBK) }, { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1DRBK) }, { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1URBK) }, - { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK) }, + { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK_010D) }, + { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK_011C) }, { } }; MODULE_DEVICE_TABLE(hid, elecom_devices); diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h index 0f8c11842a3a..9e36b4cd905e 100644 --- a/drivers/hid/hid-ids.h +++ b/drivers/hid/hid-ids.h @@ -413,6 +413,8 @@ #define I2C_DEVICE_ID_HP_ENVY_X360_15T_DR100 0x29CF #define I2C_DEVICE_ID_HP_ENVY_X360_EU0009NV 0x2CF9 #define I2C_DEVICE_ID_HP_SPECTRE_X360_15 0x2817 +#define I2C_DEVICE_ID_HP_SPECTRE_X360_13_AW0020NG 0x29DF +#define I2C_DEVICE_ID_ASUS_TP420IA_TOUCHSCREEN 0x2BC8 #define USB_DEVICE_ID_ASUS_UX550VE_TOUCHSCREEN 0x2544 #define USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN 0x2706 #define I2C_DEVICE_ID_SURFACE_GO_TOUCHSCREEN 0x261A @@ -428,7 +430,8 @@ #define USB_DEVICE_ID_ELECOM_M_DT1URBK 0x00fe #define USB_DEVICE_ID_ELECOM_M_DT1DRBK 0x00ff #define USB_DEVICE_ID_ELECOM_M_HT1URBK 0x010c -#define USB_DEVICE_ID_ELECOM_M_HT1DRBK 0x010d +#define USB_DEVICE_ID_ELECOM_M_HT1DRBK_010D 0x010d +#define USB_DEVICE_ID_ELECOM_M_HT1DRBK_011C 0x011c #define USB_VENDOR_ID_DREAM_CHEEKY 0x1d34 #define USB_DEVICE_ID_DREAM_CHEEKY_WN 0x0004 diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c index 9b59e436df0a..77c8c49852b5 100644 --- a/drivers/hid/hid-input.c +++ b/drivers/hid/hid-input.c @@ -370,6 +370,8 @@ static const struct 
hid_device_id hid_battery_quirks[] = { { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_DINOVO_EDGE_KBD), HID_BATTERY_QUIRK_IGNORE }, + { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_ASUS_TP420IA_TOUCHSCREEN), + HID_BATTERY_QUIRK_IGNORE }, { HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550_TOUCHSCREEN), HID_BATTERY_QUIRK_IGNORE }, { HID_USB_DEVICE(USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ASUS_UX550VE_TOUCHSCREEN), @@ -384,6 +386,8 @@ static const struct hid_device_id hid_battery_quirks[] = { HID_BATTERY_QUIRK_IGNORE }, { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_SPECTRE_X360_15), HID_BATTERY_QUIRK_IGNORE }, + { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_HP_SPECTRE_X360_13_AW0020NG), + HID_BATTERY_QUIRK_IGNORE }, { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_SURFACE_GO_TOUCHSCREEN), HID_BATTERY_QUIRK_IGNORE }, { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, I2C_DEVICE_ID_SURFACE_GO2_TOUCHSCREEN), diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c index abf2c95e4d0b..9c1ee8e91e0c 100644 --- a/drivers/hid/hid-logitech-hidpp.c +++ b/drivers/hid/hid-logitech-hidpp.c @@ -3978,7 +3978,8 @@ static void hidpp_connect_event(struct hidpp_device *hidpp) } hidpp_initialize_battery(hidpp); - hidpp_initialize_hires_scroll(hidpp); + if (!hid_is_usb(hidpp->hid_dev)) + hidpp_initialize_hires_scroll(hidpp); /* forward current battery state */ if (hidpp->capabilities & HIDPP_CAPABILITY_HIDPP10_BATTERY) { diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c index be3ad02573de..5bc91f68b374 100644 --- a/drivers/hid/hid-quirks.c +++ b/drivers/hid/hid-quirks.c @@ -393,7 +393,8 @@ static const struct hid_device_id hid_have_special_driver[] = { { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1URBK) }, { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_DT1DRBK) }, { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1URBK) }, - { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK) }, + { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK_010D) }, + { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK_011C) }, #endif #if IS_ENABLED(CONFIG_HID_ELO) { HID_USB_DEVICE(USB_VENDOR_ID_ELO, 0x0009) }, diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c index cbe43e2567a7..64ac5bdee3a6 100644 --- a/drivers/hv/hv_balloon.c +++ b/drivers/hv/hv_balloon.c @@ -1963,7 +1963,7 @@ static void hv_balloon_debugfs_init(struct hv_dynmem_device *b) static void hv_balloon_debugfs_exit(struct hv_dynmem_device *b) { - debugfs_remove(debugfs_lookup("hv-balloon", NULL)); + debugfs_lookup_and_remove("hv-balloon", NULL); } #else diff --git a/drivers/iio/accel/hid-sensor-accel-3d.c b/drivers/iio/accel/hid-sensor-accel-3d.c index a2def6f9380a..5eac7ea19993 100644 --- a/drivers/iio/accel/hid-sensor-accel-3d.c +++ b/drivers/iio/accel/hid-sensor-accel-3d.c @@ -280,6 +280,7 @@ static int accel_3d_capture_sample(struct hid_sensor_hub_device *hsdev, hid_sensor_convert_timestamp( &accel_state->common_attributes, *(int64_t *)raw_data); + ret = 0; break; default: break; diff --git a/drivers/iio/adc/berlin2-adc.c b/drivers/iio/adc/berlin2-adc.c index 3d2e8b4db61a..a4e7c7eff5ac 100644 --- a/drivers/iio/adc/berlin2-adc.c +++ b/drivers/iio/adc/berlin2-adc.c @@ -298,8 +298,10 @@ static int berlin2_adc_probe(struct platform_device *pdev) int ret; indio_dev = devm_iio_device_alloc(&pdev->dev, sizeof(*priv)); - if (!indio_dev) + if (!indio_dev) { + of_node_put(parent_np); return 
-ENOMEM; + } priv = iio_priv(indio_dev); diff --git a/drivers/iio/adc/imx8qxp-adc.c b/drivers/iio/adc/imx8qxp-adc.c index 36777b827165..f5a0fc9e64c5 100644 --- a/drivers/iio/adc/imx8qxp-adc.c +++ b/drivers/iio/adc/imx8qxp-adc.c @@ -86,6 +86,8 @@ #define IMX8QXP_ADC_TIMEOUT msecs_to_jiffies(100) +#define IMX8QXP_ADC_MAX_FIFO_SIZE 16 + struct imx8qxp_adc { struct device *dev; void __iomem *regs; @@ -95,6 +97,7 @@ struct imx8qxp_adc { /* Serialise ADC channel reads */ struct mutex lock; struct completion completion; + u32 fifo[IMX8QXP_ADC_MAX_FIFO_SIZE]; }; #define IMX8QXP_ADC_CHAN(_idx) { \ @@ -238,8 +241,7 @@ static int imx8qxp_adc_read_raw(struct iio_dev *indio_dev, return ret; } - *val = FIELD_GET(IMX8QXP_ADC_RESFIFO_VAL_MASK, - readl(adc->regs + IMX8QXP_ADR_ADC_RESFIFO)); + *val = adc->fifo[0]; mutex_unlock(&adc->lock); return IIO_VAL_INT; @@ -265,10 +267,15 @@ static irqreturn_t imx8qxp_adc_isr(int irq, void *dev_id) { struct imx8qxp_adc *adc = dev_id; u32 fifo_count; + int i; fifo_count = FIELD_GET(IMX8QXP_ADC_FCTRL_FCOUNT_MASK, readl(adc->regs + IMX8QXP_ADR_ADC_FCTRL)); + for (i = 0; i < fifo_count; i++) + adc->fifo[i] = FIELD_GET(IMX8QXP_ADC_RESFIFO_VAL_MASK, + readl_relaxed(adc->regs + IMX8QXP_ADR_ADC_RESFIFO)); + if (fifo_count) complete(&adc->completion); diff --git a/drivers/iio/adc/stm32-dfsdm-adc.c b/drivers/iio/adc/stm32-dfsdm-adc.c index 6d21ea84fa82..a428bdb567d5 100644 --- a/drivers/iio/adc/stm32-dfsdm-adc.c +++ b/drivers/iio/adc/stm32-dfsdm-adc.c @@ -1520,6 +1520,7 @@ static const struct of_device_id stm32_dfsdm_adc_match[] = { }, {} }; +MODULE_DEVICE_TABLE(of, stm32_dfsdm_adc_match); static int stm32_dfsdm_adc_probe(struct platform_device *pdev) { diff --git a/drivers/iio/adc/twl6030-gpadc.c b/drivers/iio/adc/twl6030-gpadc.c index f53e8558b560..32873fb5f367 100644 --- a/drivers/iio/adc/twl6030-gpadc.c +++ b/drivers/iio/adc/twl6030-gpadc.c @@ -57,6 +57,18 @@ #define TWL6030_GPADCS BIT(1) #define TWL6030_GPADCR BIT(0) +#define USB_VBUS_CTRL_SET 0x04 +#define USB_ID_CTRL_SET 0x06 + +#define TWL6030_MISC1 0xE4 +#define VBUS_MEAS 0x01 +#define ID_MEAS 0x01 + +#define VAC_MEAS 0x04 +#define VBAT_MEAS 0x02 +#define BB_MEAS 0x01 + + /** * struct twl6030_chnl_calib - channel calibration * @gain: slope coefficient for ideal curve @@ -927,6 +939,26 @@ static int twl6030_gpadc_probe(struct platform_device *pdev) return ret; } + ret = twl_i2c_write_u8(TWL_MODULE_USB, VBUS_MEAS, USB_VBUS_CTRL_SET); + if (ret < 0) { + dev_err(dev, "failed to wire up inputs\n"); + return ret; + } + + ret = twl_i2c_write_u8(TWL_MODULE_USB, ID_MEAS, USB_ID_CTRL_SET); + if (ret < 0) { + dev_err(dev, "failed to wire up inputs\n"); + return ret; + } + + ret = twl_i2c_write_u8(TWL6030_MODULE_ID0, + VBAT_MEAS | BB_MEAS | VAC_MEAS, + TWL6030_MISC1); + if (ret < 0) { + dev_err(dev, "failed to wire up inputs\n"); + return ret; + } + indio_dev->name = DRIVER_NAME; indio_dev->info = &twl6030_gpadc_iio_info; indio_dev->modes = INDIO_DIRECT_MODE; diff --git a/drivers/iio/adc/xilinx-ams.c b/drivers/iio/adc/xilinx-ams.c index 5b4bdf3a26bb..a507d2e17079 100644 --- a/drivers/iio/adc/xilinx-ams.c +++ b/drivers/iio/adc/xilinx-ams.c @@ -1329,7 +1329,7 @@ static int ams_parse_firmware(struct iio_dev *indio_dev) dev_channels = devm_krealloc(dev, ams_channels, dev_size, GFP_KERNEL); if (!dev_channels) - ret = -ENOMEM; + return -ENOMEM; indio_dev->channels = dev_channels; indio_dev->num_channels = num_channels; diff --git a/drivers/iio/gyro/hid-sensor-gyro-3d.c b/drivers/iio/gyro/hid-sensor-gyro-3d.c index 
8f0ad022c7f1..698c50da1f10 100644 --- a/drivers/iio/gyro/hid-sensor-gyro-3d.c +++ b/drivers/iio/gyro/hid-sensor-gyro-3d.c @@ -231,6 +231,7 @@ static int gyro_3d_capture_sample(struct hid_sensor_hub_device *hsdev, gyro_state->timestamp = hid_sensor_convert_timestamp(&gyro_state->common_attributes, *(s64 *)raw_data); + ret = 0; break; default: break; diff --git a/drivers/iio/imu/fxos8700_core.c b/drivers/iio/imu/fxos8700_core.c index 423cfe526f2a..6d189c4b9ff9 100644 --- a/drivers/iio/imu/fxos8700_core.c +++ b/drivers/iio/imu/fxos8700_core.c @@ -10,6 +10,7 @@ #include <linux/regmap.h> #include <linux/acpi.h> #include <linux/bitops.h> +#include <linux/bitfield.h> #include <linux/iio/iio.h> #include <linux/iio/sysfs.h> @@ -144,9 +145,8 @@ #define FXOS8700_NVM_DATA_BNK0 0xa7 /* Bit definitions for FXOS8700_CTRL_REG1 */ -#define FXOS8700_CTRL_ODR_MSK 0x38 #define FXOS8700_CTRL_ODR_MAX 0x00 -#define FXOS8700_CTRL_ODR_MIN GENMASK(4, 3) +#define FXOS8700_CTRL_ODR_MSK GENMASK(5, 3) /* Bit definitions for FXOS8700_M_CTRL_REG1 */ #define FXOS8700_HMS_MASK GENMASK(1, 0) @@ -320,7 +320,7 @@ static enum fxos8700_sensor fxos8700_to_sensor(enum iio_chan_type iio_type) switch (iio_type) { case IIO_ACCEL: return FXOS8700_ACCEL; - case IIO_ANGL_VEL: + case IIO_MAGN: return FXOS8700_MAGN; default: return -EINVAL; @@ -345,15 +345,35 @@ static int fxos8700_set_active_mode(struct fxos8700_data *data, static int fxos8700_set_scale(struct fxos8700_data *data, enum fxos8700_sensor t, int uscale) { - int i; + int i, ret, val; + bool active_mode; static const int scale_num = ARRAY_SIZE(fxos8700_accel_scale); struct device *dev = regmap_get_device(data->regmap); if (t == FXOS8700_MAGN) { - dev_err(dev, "Magnetometer scale is locked at 1200uT\n"); + dev_err(dev, "Magnetometer scale is locked at 0.001Gs\n"); return -EINVAL; } + /* + * When the device is in active mode, setting an ACCEL full-scale + * range (2g/4g/8g) in FXOS8700_XYZ_DATA_CFG fails. This does not + * align with the datasheet, but it is how the fxos8700 chip + * behaves. Set the device to standby mode before setting an + * ACCEL full-scale range. + */ + ret = regmap_read(data->regmap, FXOS8700_CTRL_REG1, &val); + if (ret) + return ret; + + active_mode = val & FXOS8700_ACTIVE; + if (active_mode) { + ret = regmap_write(data->regmap, FXOS8700_CTRL_REG1, + val & ~FXOS8700_ACTIVE); + if (ret) + return ret; + } + for (i = 0; i < scale_num; i++) if (fxos8700_accel_scale[i].uscale == uscale) break; @@ -361,8 +381,12 @@ static int fxos8700_set_scale(struct fxos8700_data *data, if (i == scale_num) return -EINVAL; - return regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG, + ret = regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG, fxos8700_accel_scale[i].bits); + if (ret) + return ret; + return regmap_write(data->regmap, FXOS8700_CTRL_REG1, + active_mode); } static int fxos8700_get_scale(struct fxos8700_data *data, @@ -372,7 +396,7 @@ static int fxos8700_get_scale(struct fxos8700_data *data, static const int scale_num = ARRAY_SIZE(fxos8700_accel_scale); if (t == FXOS8700_MAGN) { - *uscale = 1200; /* Magnetometer is locked at 1200uT */ + *uscale = 1000; /* Magnetometer is locked at 0.001Gs */ return 0; } @@ -394,22 +418,61 @@ static int fxos8700_get_data(struct fxos8700_data *data, int chan_type, int axis, int *val) { u8 base, reg; + s16 tmp; int ret; - enum fxos8700_sensor type = fxos8700_to_sensor(chan_type); - base = type ? FXOS8700_OUT_X_MSB : FXOS8700_M_OUT_X_MSB; + /* + * Different register base addresses vary with channel types.
+ * This bug wasn't noticed before because the enum-based selection + * is hard to read. Use a switch statement instead. + */ + switch (chan_type) { + case IIO_ACCEL: + base = FXOS8700_OUT_X_MSB; + break; + case IIO_MAGN: + base = FXOS8700_M_OUT_X_MSB; + break; + default: + return -EINVAL; + } /* Block read 6 bytes of device output registers to avoid data loss */ ret = regmap_bulk_read(data->regmap, base, data->buf, - FXOS8700_DATA_BUF_SIZE); + sizeof(data->buf)); if (ret) return ret; /* Convert axis to buffer index */ reg = axis - IIO_MOD_X; + /* + * Convert to native endianness. The accel data and magn data + * are signed, so a forced type conversion is needed. + */ + tmp = be16_to_cpu(data->buf[reg]); + + /* + * ACCEL output data registers contain the X-axis, Y-axis, and Z-axis + * 14-bit left-justified sample data and MAGN output data registers + * contain the X-axis, Y-axis, and Z-axis 16-bit sample data. Apply + * a signed 2-bit right shift to the raw data read back from the ACCEL + * output data registers, and leave the MAGN data as-is. Values are + * then sign-extended to 32 bits. + */ + switch (chan_type) { + case IIO_ACCEL: + tmp = tmp >> 2; + break; + case IIO_MAGN: + /* Nothing to do */ + break; + default: + return -EINVAL; + } + /* Convert to native endianness */ - *val = sign_extend32(be16_to_cpu(data->buf[reg]), 15); + *val = sign_extend32(tmp, 15); return 0; } @@ -445,10 +508,9 @@ static int fxos8700_set_odr(struct fxos8700_data *data, enum fxos8700_sensor t, if (i >= odr_num) return -EINVAL; - return regmap_update_bits(data->regmap, - FXOS8700_CTRL_REG1, - FXOS8700_CTRL_ODR_MSK + FXOS8700_ACTIVE, - fxos8700_odr[i].bits << 3 | active_mode); + val &= ~FXOS8700_CTRL_ODR_MSK; + val |= FIELD_PREP(FXOS8700_CTRL_ODR_MSK, fxos8700_odr[i].bits) | FXOS8700_ACTIVE; + return regmap_write(data->regmap, FXOS8700_CTRL_REG1, val); } static int fxos8700_get_odr(struct fxos8700_data *data, enum fxos8700_sensor t, @@ -461,7 +523,7 @@ static int fxos8700_get_odr(struct fxos8700_data *data, enum fxos8700_sensor t, if (ret) return ret; - val &= FXOS8700_CTRL_ODR_MSK; + val = FIELD_GET(FXOS8700_CTRL_ODR_MSK, val); for (i = 0; i < odr_num; i++) if (val == fxos8700_odr[i].bits) @@ -526,7 +588,7 @@ static IIO_CONST_ATTR(in_accel_sampling_frequency_available, static IIO_CONST_ATTR(in_magn_sampling_frequency_available, "1.5625 6.25 12.5 50 100 200 400 800"); static IIO_CONST_ATTR(in_accel_scale_available, "0.000244 0.000488 0.000976"); -static IIO_CONST_ATTR(in_magn_scale_available, "0.000001200"); +static IIO_CONST_ATTR(in_magn_scale_available, "0.001000"); static struct attribute *fxos8700_attrs[] = { &iio_const_attr_in_accel_sampling_frequency_available.dev_attr.attr, @@ -592,14 +654,19 @@ static int fxos8700_chip_init(struct fxos8700_data *data, bool use_spi) if (ret) return ret; - /* Max ODR (800Hz individual or 400Hz hybrid), active mode */ - ret = regmap_write(data->regmap, FXOS8700_CTRL_REG1, - FXOS8700_CTRL_ODR_MAX | FXOS8700_ACTIVE); + /* + * Set the max full-scale range (+/-8G) for the ACCEL sensor during + * chip initialization, then activate the device.
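The conversion comment above becomes clearer with a single worked value. A self-contained sketch (the sample bytes are invented purely for illustration):

#include <linux/bitops.h>	/* sign_extend32() */
#include <asm/byteorder.h>	/* be16_to_cpu() */

/*
 * Suppose the ACCEL X-axis registers read back as 0x80 0x04: a
 * big-endian, 14-bit, left-justified, negative sample.
 */
static int fxos8700_demo_accel_convert(__be16 raw)
{
	s16 tmp = be16_to_cpu(raw);	/* 0x8004 == -32764 as signed 16-bit */

	tmp >>= 2;			/* drop the two pad bits: -8191 */
	return sign_extend32(tmp, 15);	/* widen to 32 bits, still -8191 */
}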
+ */ + ret = regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG, MODE_8G); if (ret) return ret; - /* Set for max full-scale range (+/-8G) */ - return regmap_write(data->regmap, FXOS8700_XYZ_DATA_CFG, MODE_8G); + /* Max ODR (800Hz individual or 400Hz hybrid), active mode */ + return regmap_update_bits(data->regmap, FXOS8700_CTRL_REG1, + FXOS8700_CTRL_ODR_MSK | FXOS8700_ACTIVE, + FIELD_PREP(FXOS8700_CTRL_ODR_MSK, FXOS8700_CTRL_ODR_MAX) | + FXOS8700_ACTIVE); } static void fxos8700_chip_uninit(void *data) diff --git a/drivers/iio/imu/st_lsm6dsx/Kconfig b/drivers/iio/imu/st_lsm6dsx/Kconfig index f6660847fb58..8c16cdacf2f2 100644 --- a/drivers/iio/imu/st_lsm6dsx/Kconfig +++ b/drivers/iio/imu/st_lsm6dsx/Kconfig @@ -4,6 +4,7 @@ config IIO_ST_LSM6DSX tristate "ST_LSM6DSx driver for STM 6-axis IMU MEMS sensors" depends on (I2C || SPI || I3C) select IIO_BUFFER + select IIO_TRIGGERED_BUFFER select IIO_KFIFO_BUF select IIO_ST_LSM6DSX_I2C if (I2C) select IIO_ST_LSM6DSX_SPI if (SPI_MASTER) diff --git a/drivers/iio/light/cm32181.c b/drivers/iio/light/cm32181.c index 001055d09750..b1674a5bfa36 100644 --- a/drivers/iio/light/cm32181.c +++ b/drivers/iio/light/cm32181.c @@ -440,6 +440,8 @@ static int cm32181_probe(struct i2c_client *client) if (!indio_dev) return -ENOMEM; + i2c_set_clientdata(client, indio_dev); + /* * Some ACPI systems list 2 I2C resources for the CM3218 sensor, the * SMBus Alert Response Address (ARA, 0x0c) and the actual I2C address. @@ -460,8 +462,6 @@ static int cm32181_probe(struct i2c_client *client) return PTR_ERR(client); } - i2c_set_clientdata(client, indio_dev); - cm32181 = iio_priv(indio_dev); cm32181->client = client; cm32181->dev = dev; @@ -490,7 +490,8 @@ static int cm32181_probe(struct i2c_client *client) static int cm32181_suspend(struct device *dev) { - struct i2c_client *client = to_i2c_client(dev); + struct cm32181_chip *cm32181 = iio_priv(dev_get_drvdata(dev)); + struct i2c_client *client = cm32181->client; return i2c_smbus_write_word_data(client, CM32181_REG_ADDR_CMD, CM32181_CMD_ALS_DISABLE); @@ -498,8 +499,8 @@ static int cm32181_suspend(struct device *dev) static int cm32181_resume(struct device *dev) { - struct i2c_client *client = to_i2c_client(dev); struct cm32181_chip *cm32181 = iio_priv(dev_get_drvdata(dev)); + struct i2c_client *client = cm32181->client; return i2c_smbus_write_word_data(client, CM32181_REG_ADDR_CMD, cm32181->conf_regs[CM32181_REG_ADDR_CMD]); diff --git a/drivers/net/bonding/bond_debugfs.c b/drivers/net/bonding/bond_debugfs.c index 4f9b4a18c74c..594094526648 100644 --- a/drivers/net/bonding/bond_debugfs.c +++ b/drivers/net/bonding/bond_debugfs.c @@ -76,7 +76,7 @@ void bond_debug_reregister(struct bonding *bond) d = debugfs_rename(bonding_debug_root, bond->debug_dir, bonding_debug_root, bond->dev->name); - if (d) { + if (!IS_ERR(d)) { bond->debug_dir = d; } else { netdev_warn(bond->dev, "failed to reregister, so just unregister old one\n"); diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c index 616b21c90d05..3a15015bc409 100644 --- a/drivers/net/dsa/mt7530.c +++ b/drivers/net/dsa/mt7530.c @@ -1302,14 +1302,26 @@ mt7530_port_set_vlan_aware(struct dsa_switch *ds, int port) if (!priv->ports[port].pvid) mt7530_rmw(priv, MT7530_PVC_P(port), ACC_FRM_MASK, MT7530_VLAN_ACC_TAGGED); - } - /* Set the port as a user port which is to be able to recognize VID - * from incoming packets before fetching entry within the VLAN table. 
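The ODR rework just above swaps hand-rolled shifts for the bitfield.h accessors; the round trip is compact enough to demonstrate on its own. A sketch using an invented mask (the real driver mask is FXOS8700_CTRL_ODR_MSK):

#include <linux/bitfield.h>
#include <linux/bits.h>

#define DEMO_ODR_MSK	GENMASK(5, 3)	/* invented stand-in field */

/* read-modify-write: clear the field, then shift the new value into it */
static u32 demo_odr_update(u32 reg, u32 odr)
{
	reg &= ~DEMO_ODR_MSK;
	return reg | FIELD_PREP(DEMO_ODR_MSK, odr);
}

/* extract the field as a right-aligned value */
static u32 demo_odr_read(u32 reg)
{
	return FIELD_GET(DEMO_ODR_MSK, reg);
}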
- */ - mt7530_rmw(priv, MT7530_PVC_P(port), VLAN_ATTR_MASK | PVC_EG_TAG_MASK, - VLAN_ATTR(MT7530_VLAN_USER) | - PVC_EG_TAG(MT7530_VLAN_EG_DISABLED)); + /* Set the port as a user port which is to be able to recognize + * VID from incoming packets before fetching entry within the + * VLAN table. + */ + mt7530_rmw(priv, MT7530_PVC_P(port), + VLAN_ATTR_MASK | PVC_EG_TAG_MASK, + VLAN_ATTR(MT7530_VLAN_USER) | + PVC_EG_TAG(MT7530_VLAN_EG_DISABLED)); + } else { + /* Also set CPU ports to the "user" VLAN port attribute, to + * allow VLAN classification, but keep the EG_TAG attribute as + * "consistent" (i.o.w. don't change its value) for packets + * received by the switch from the CPU, so that tagged packets + * are forwarded to user ports as tagged, and untagged as + * untagged. + */ + mt7530_rmw(priv, MT7530_PVC_P(port), VLAN_ATTR_MASK, + VLAN_ATTR(MT7530_VLAN_USER)); + } } static void diff --git a/drivers/net/dsa/ocelot/Kconfig b/drivers/net/dsa/ocelot/Kconfig index 640725524d0c..eff0a7dfcd21 100644 --- a/drivers/net/dsa/ocelot/Kconfig +++ b/drivers/net/dsa/ocelot/Kconfig @@ -12,6 +12,7 @@ config NET_DSA_MSCC_OCELOT_EXT tristate "Ocelot External Ethernet switch support" depends on NET_DSA && SPI depends on NET_VENDOR_MICROSEMI + depends on PTP_1588_CLOCK_OPTIONAL select MDIO_MSCC_MIIM select MFD_OCELOT_CORE select MSCC_OCELOT_SWITCH_LIB diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c index e8ad5ea31aff..d3999db7c6a2 100644 --- a/drivers/net/ethernet/amazon/ena/ena_netdev.c +++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c @@ -597,7 +597,9 @@ static int ena_xdp_set(struct net_device *netdev, struct netdev_bpf *bpf) if (rc) return rc; } + xdp_features_set_redirect_target(netdev, false); } else if (old_bpf_prog) { + xdp_features_clear_redirect_target(netdev); rc = ena_destroy_and_free_all_xdp_queues(adapter); if (rc) return rc; @@ -4103,6 +4105,8 @@ static void ena_set_conf_feat_params(struct ena_adapter *adapter, /* Set offload features */ ena_set_dev_offloads(feat, netdev); + netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT; + adapter->max_mtu = feat->dev_attr.max_mtu; netdev->max_mtu = adapter->max_mtu; netdev->min_mtu = ENA_MIN_MTU; diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c index 06508eebb585..d6d6d5d37ff3 100644 --- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c +++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c @@ -384,6 +384,11 @@ void aq_nic_ndev_init(struct aq_nic_s *self) self->ndev->mtu = aq_nic_cfg->mtu - ETH_HLEN; self->ndev->max_mtu = aq_hw_caps->mtu - ETH_FCS_LEN - ETH_HLEN; + self->ndev->xdp_features = NETDEV_XDP_ACT_BASIC | + NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT | + NETDEV_XDP_ACT_RX_SG | + NETDEV_XDP_ACT_NDO_XMIT_SG; } void aq_nic_set_tx_ring(struct aq_nic_s *self, unsigned int idx, diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c index fd6b5862f74e..ab5dfb3a4081 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c @@ -13687,6 +13687,9 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent) netif_set_tso_max_size(dev, GSO_MAX_SIZE); + dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_RX_SG; + #ifdef CONFIG_BNXT_SRIOV init_waitqueue_head(&bp->sriov_cfg_wait); #endif diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c 
b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c index 36d5202c0aee..5843c93b1711 100644 --- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c +++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c @@ -422,9 +422,11 @@ static int bnxt_xdp_set(struct bnxt *bp, struct bpf_prog *prog) if (prog) { bnxt_set_rx_skb_mode(bp, true); + xdp_features_set_redirect_target(dev, true); } else { int rx, tx; + xdp_features_clear_redirect_target(dev); bnxt_set_rx_skb_mode(bp, false); bnxt_get_max_rings(bp, &rx, &tx, true); if (rx > 1) { diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c index 41964fd02452..6e141a8bbf43 100644 --- a/drivers/net/ethernet/cadence/macb_main.c +++ b/drivers/net/ethernet/cadence/macb_main.c @@ -4684,25 +4684,26 @@ static int init_reset_optional(struct platform_device *pdev) if (ret) return dev_err_probe(&pdev->dev, ret, "failed to init SGMII PHY\n"); - } - ret = zynqmp_pm_is_function_supported(PM_IOCTL, IOCTL_SET_GEM_CONFIG); - if (!ret) { - u32 pm_info[2]; + ret = zynqmp_pm_is_function_supported(PM_IOCTL, IOCTL_SET_GEM_CONFIG); + if (!ret) { + u32 pm_info[2]; + + ret = of_property_read_u32_array(pdev->dev.of_node, "power-domains", + pm_info, ARRAY_SIZE(pm_info)); + if (ret) { + dev_err(&pdev->dev, "Failed to read power management information\n"); + goto err_out_phy_exit; + } + ret = zynqmp_pm_set_gem_config(pm_info[1], GEM_CONFIG_FIXED, 0); + if (ret) + goto err_out_phy_exit; - ret = of_property_read_u32_array(pdev->dev.of_node, "power-domains", - pm_info, ARRAY_SIZE(pm_info)); - if (ret) { - dev_err(&pdev->dev, "Failed to read power management information\n"); - goto err_out_phy_exit; + ret = zynqmp_pm_set_gem_config(pm_info[1], GEM_CONFIG_SGMII_MODE, 1); + if (ret) + goto err_out_phy_exit; } - ret = zynqmp_pm_set_gem_config(pm_info[1], GEM_CONFIG_FIXED, 0); - if (ret) - goto err_out_phy_exit; - ret = zynqmp_pm_set_gem_config(pm_info[1], GEM_CONFIG_SGMII_MODE, 1); - if (ret) - goto err_out_phy_exit; } /* Fully reset controller at hardware level if mapped in device tree */ diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c index f2f95493ec89..8b25313c7f6b 100644 --- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c +++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c @@ -2218,6 +2218,8 @@ static int nicvf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) netdev->netdev_ops = &nicvf_netdev_ops; netdev->watchdog_timeo = NICVF_TX_TIMEOUT; + netdev->xdp_features = NETDEV_XDP_ACT_BASIC; + /* MTU range: 64 - 9200 */ netdev->min_mtu = NIC_HW_MIN_FRS; netdev->max_mtu = NIC_HW_MAX_FRS; diff --git a/drivers/net/ethernet/engleder/tsnep_main.c b/drivers/net/ethernet/engleder/tsnep_main.c index c3cf427a9409..6982aaa928b5 100644 --- a/drivers/net/ethernet/engleder/tsnep_main.c +++ b/drivers/net/ethernet/engleder/tsnep_main.c @@ -1926,6 +1926,10 @@ static int tsnep_probe(struct platform_device *pdev) netdev->features = NETIF_F_SG; netdev->hw_features = netdev->features | NETIF_F_LOOPBACK; + netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT | + NETDEV_XDP_ACT_NDO_XMIT_SG; + /* carrier off reporting is important to ethtool even BEFORE open */ netif_carrier_off(netdev); diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c index 027fff9f7db0..9318a2554056 100644 --- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c @@ -244,6 
+244,10 @@ static int dpaa_netdev_init(struct net_device *net_dev, net_dev->features |= net_dev->hw_features; net_dev->vlan_features = net_dev->features; + net_dev->xdp_features = NETDEV_XDP_ACT_BASIC | + NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT; + if (is_valid_ether_addr(mac_addr)) { memcpy(net_dev->perm_addr, mac_addr, net_dev->addr_len); eth_hw_addr_set(net_dev, mac_addr); diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c index 2e79d18fc3c7..746ccfde7255 100644 --- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c +++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c @@ -4596,6 +4596,10 @@ static int dpaa2_eth_netdev_init(struct net_device *net_dev) NETIF_F_LLTX | NETIF_F_HW_TC | NETIF_F_TSO; net_dev->gso_max_segs = DPAA2_ETH_ENQUEUE_MAX_FDS; net_dev->hw_features = net_dev->features; + net_dev->xdp_features = NETDEV_XDP_ACT_BASIC | + NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_XSK_ZEROCOPY | + NETDEV_XDP_ACT_NDO_XMIT; if (priv->dpni_attrs.vlan_filter_entries) net_dev->hw_features |= NETIF_F_HW_VLAN_CTAG_FILTER; diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pf.c b/drivers/net/ethernet/freescale/enetc/enetc_pf.c index 97056dc3496d..7cd22d370caa 100644 --- a/drivers/net/ethernet/freescale/enetc/enetc_pf.c +++ b/drivers/net/ethernet/freescale/enetc/enetc_pf.c @@ -807,6 +807,9 @@ static void enetc_pf_netdev_setup(struct enetc_si *si, struct net_device *ndev, ndev->hw_features |= NETIF_F_RXHASH; ndev->priv_flags |= IFF_UNICAST_FLT; + ndev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT | NETDEV_XDP_ACT_RX_SG | + NETDEV_XDP_ACT_NDO_XMIT_SG; if (si->hw_features & ENETC_SI_F_PSFP && !enetc_psfp_enable(priv)) { priv->active_offloads |= ENETC_F_QCI; diff --git a/drivers/net/ethernet/fungible/funeth/funeth_main.c b/drivers/net/ethernet/fungible/funeth/funeth_main.c index b4cce30e526a..df86770731ad 100644 --- a/drivers/net/ethernet/fungible/funeth/funeth_main.c +++ b/drivers/net/ethernet/fungible/funeth/funeth_main.c @@ -1160,6 +1160,11 @@ static int fun_xdp_setup(struct net_device *dev, struct netdev_bpf *xdp) WRITE_ONCE(rxqs[i]->xdp_prog, prog); } + if (prog) + xdp_features_set_redirect_target(dev, true); + else + xdp_features_clear_redirect_target(dev); + dev->max_mtu = prog ? XDP_MAX_MTU : FUN_MAX_MTU; old_prog = xchg(&fp->xdp_prog, prog); if (old_prog) @@ -1765,6 +1770,7 @@ static int fun_create_netdev(struct fun_ethdev *ed, unsigned int portid) netdev->vlan_features = netdev->features & VLAN_FEAT; netdev->mpls_features = netdev->vlan_features; netdev->hw_enc_features = netdev->hw_features; + netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT; netdev->min_mtu = ETH_MIN_MTU; netdev->max_mtu = FUN_MAX_MTU; diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c index 2c66fd20f23c..3667ad6493d6 100644 --- a/drivers/net/ethernet/intel/i40e/i40e_main.c +++ b/drivers/net/ethernet/intel/i40e/i40e_main.c @@ -13339,9 +13339,11 @@ static int i40e_xdp_setup(struct i40e_vsi *vsi, struct bpf_prog *prog, old_prog = xchg(&vsi->xdp_prog, prog); if (need_reset) { - if (!prog) + if (!prog) { + xdp_features_clear_redirect_target(vsi->netdev); /* Wait until ndo_xsk_wakeup completes. 
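Several of the driver patches above make the same two moves: advertise static XDP capabilities once at probe time, then flip the redirect-target bit only while a program is actually attached. A hedged composite sketch of that pattern (the demo_* names are invented; the field and helpers are the ones used in the hunks above, assumed to be declared via net/xdp.h):

#include <linux/netdevice.h>
#include <net/xdp.h>

/* set once, while the netdev is being prepared for registration */
static void demo_probe_xdp_caps(struct net_device *dev)
{
	dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT;
}

/* toggled from the driver's XDP setup path */
static void demo_xdp_prog_changed(struct net_device *dev, struct bpf_prog *prog)
{
	if (prog)
		/* second argument: whether ndo_xdp_xmit accepts S/G frames */
		xdp_features_set_redirect_target(dev, false);
	else
		xdp_features_clear_redirect_target(dev);
}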
*/ synchronize_rcu(); + } i40e_reset_and_rebuild(pf, true, true); } @@ -13362,11 +13364,13 @@ static int i40e_xdp_setup(struct i40e_vsi *vsi, struct bpf_prog *prog, /* Kick start the NAPI context if there is an AF_XDP socket open * on that queue id. This so that receiving will start. */ - if (need_reset && prog) + if (need_reset && prog) { for (i = 0; i < vsi->num_queue_pairs; i++) if (vsi->xdp_rings[i]->xsk_pool) (void)i40e_xsk_wakeup(vsi->netdev, i, XDP_WAKEUP_RX); + xdp_features_set_redirect_target(vsi->netdev, true); + } return 0; } @@ -13783,6 +13787,8 @@ static int i40e_config_netdev(struct i40e_vsi *vsi) netdev->hw_enc_features |= NETIF_F_TSO_MANGLEID; netdev->features &= ~NETIF_F_HW_TC; + netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_XSK_ZEROCOPY; if (vsi->type == I40E_VSI_MAIN) { SET_NETDEV_DEV(netdev, &pf->pdev->dev); diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c index 554095b25f44..1911d644dfa8 100644 --- a/drivers/net/ethernet/intel/ice/ice_base.c +++ b/drivers/net/ethernet/intel/ice/ice_base.c @@ -355,9 +355,6 @@ static unsigned int ice_rx_offset(struct ice_rx_ring *rx_ring) { if (ice_ring_uses_build_skb(rx_ring)) return ICE_SKB_PAD; - else if (ice_is_xdp_ena_vsi(rx_ring->vsi)) - return XDP_PACKET_HEADROOM; - return 0; } @@ -495,7 +492,7 @@ static int ice_setup_rx_ctx(struct ice_rx_ring *ring) int ice_vsi_cfg_rxq(struct ice_rx_ring *ring) { struct device *dev = ice_pf_to_dev(ring->vsi->back); - u16 num_bufs = ICE_DESC_UNUSED(ring); + u32 num_bufs = ICE_RX_DESC_UNUSED(ring); int err; ring->rx_buf_len = ring->vsi->rx_buf_len; @@ -503,8 +500,10 @@ int ice_vsi_cfg_rxq(struct ice_rx_ring *ring) if (ring->vsi->type == ICE_VSI_PF) { if (!xdp_rxq_info_is_reg(&ring->xdp_rxq)) /* coverity[check_return] */ - xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev, - ring->q_index, ring->q_vector->napi.napi_id); + __xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev, + ring->q_index, + ring->q_vector->napi.napi_id, + ring->vsi->rx_buf_len); ring->xsk_pool = ice_xsk_pool(ring); if (ring->xsk_pool) { @@ -524,9 +523,11 @@ int ice_vsi_cfg_rxq(struct ice_rx_ring *ring) } else { if (!xdp_rxq_info_is_reg(&ring->xdp_rxq)) /* coverity[check_return] */ - xdp_rxq_info_reg(&ring->xdp_rxq, - ring->netdev, - ring->q_index, ring->q_vector->napi.napi_id); + __xdp_rxq_info_reg(&ring->xdp_rxq, + ring->netdev, + ring->q_index, + ring->q_vector->napi.napi_id, + ring->vsi->rx_buf_len); err = xdp_rxq_info_reg_mem_model(&ring->xdp_rxq, MEM_TYPE_PAGE_SHARED, @@ -536,6 +537,8 @@ int ice_vsi_cfg_rxq(struct ice_rx_ring *ring) } } + xdp_init_buff(&ring->xdp, ice_rx_pg_size(ring) / 2, &ring->xdp_rxq); + ring->xdp.data = NULL; err = ice_setup_rx_ctx(ring); if (err) { dev_err(dev, "ice_setup_rx_ctx failed for RxQ %d, err %d\n", diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c index 75f8b8bca7e8..1b79bb0a4fcd 100644 --- a/drivers/net/ethernet/intel/ice/ice_common.c +++ b/drivers/net/ethernet/intel/ice/ice_common.c @@ -5534,7 +5534,7 @@ bool ice_fw_supports_report_dflt_cfg(struct ice_hw *hw) * returned by the firmware is a 16 bit * value, but is indexed * by [fls(speed) - 1] */ -static const u32 ice_aq_to_link_speed[15] = { +static const u32 ice_aq_to_link_speed[] = { SPEED_10, /* BIT(0) */ SPEED_100, SPEED_1000, @@ -5546,10 +5546,6 @@ static const u32 ice_aq_to_link_speed[15] = { SPEED_40000, SPEED_50000, SPEED_100000, /* BIT(10) */ - 0, - 0, - 0, - 0 /* BIT(14) */ }; /** @@ -5560,5 
+5556,8 @@ static const u32 ice_aq_to_link_speed[15] = { */ u32 ice_get_link_speed(u16 index) { + if (index >= ARRAY_SIZE(ice_aq_to_link_speed)) + return 0; + return ice_aq_to_link_speed[index]; } diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c index 6f03e9fe331a..b360bd8f1599 100644 --- a/drivers/net/ethernet/intel/ice/ice_ethtool.c +++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c @@ -3046,8 +3046,6 @@ ice_set_ringparam(struct net_device *netdev, struct ethtool_ringparam *ring, /* clone ring and setup updated count */ xdp_rings[i] = *vsi->xdp_rings[i]; xdp_rings[i].count = new_tx_cnt; - xdp_rings[i].next_dd = ICE_RING_QUARTER(&xdp_rings[i]) - 1; - xdp_rings[i].next_rs = ICE_RING_QUARTER(&xdp_rings[i]) - 1; xdp_rings[i].desc = NULL; xdp_rings[i].tx_buf = NULL; err = ice_setup_tx_ring(&xdp_rings[i]); @@ -3092,7 +3090,7 @@ process_rx: /* allocate Rx buffers */ err = ice_alloc_rx_bufs(&rx_rings[i], - ICE_DESC_UNUSED(&rx_rings[i])); + ICE_RX_DESC_UNUSED(&rx_rings[i])); rx_unwind: if (err) { while (i) { diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c index 960197b2301c..37fe639712e6 100644 --- a/drivers/net/ethernet/intel/ice/ice_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_lib.c @@ -1961,8 +1961,8 @@ void ice_update_eth_stats(struct ice_vsi *vsi) void ice_vsi_cfg_frame_size(struct ice_vsi *vsi) { if (!vsi->netdev || test_bit(ICE_FLAG_LEGACY_RX, vsi->back->flags)) { - vsi->max_frame = ICE_AQ_SET_MAC_FRAME_SIZE_MAX; - vsi->rx_buf_len = ICE_RXBUF_2048; + vsi->max_frame = ICE_MAX_FRAME_LEGACY_RX; + vsi->rx_buf_len = ICE_RXBUF_1664; #if (PAGE_SIZE < 8192) } else if (!ICE_2K_TOO_SMALL_WITH_PADDING && (vsi->netdev->mtu <= ETH_DATA_LEN)) { @@ -1971,11 +1971,7 @@ void ice_vsi_cfg_frame_size(struct ice_vsi *vsi) #endif } else { vsi->max_frame = ICE_AQ_SET_MAC_FRAME_SIZE_MAX; -#if (PAGE_SIZE < 8192) vsi->rx_buf_len = ICE_RXBUF_3072; -#else - vsi->rx_buf_len = ICE_RXBUF_2048; -#endif } } diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c index b71ff7e2ebf2..0712c1055aea 100644 --- a/drivers/net/ethernet/intel/ice/ice_main.c +++ b/drivers/net/ethernet/intel/ice/ice_main.c @@ -22,6 +22,7 @@ #include "ice_eswitch.h" #include "ice_tc_lib.h" #include "ice_vsi_vlan_ops.h" +#include <net/xdp_sock_drv.h> #define DRV_SUMMARY "Intel(R) Ethernet Connection E800 Series Linux Driver" static const char ice_driver_string[] = DRV_SUMMARY; @@ -2569,8 +2570,6 @@ static int ice_xdp_alloc_setup_rings(struct ice_vsi *vsi) xdp_ring->netdev = NULL; xdp_ring->dev = dev; xdp_ring->count = vsi->num_tx_desc; - xdp_ring->next_dd = ICE_RING_QUARTER(xdp_ring) - 1; - xdp_ring->next_rs = ICE_RING_QUARTER(xdp_ring) - 1; WRITE_ONCE(vsi->xdp_rings[i], xdp_ring); if (ice_setup_tx_ring(xdp_ring)) goto free_xdp_rings; @@ -2862,6 +2861,18 @@ int ice_vsi_determine_xdp_res(struct ice_vsi *vsi) } /** + * ice_max_xdp_frame_size - returns the maximum allowed frame size for XDP + * @vsi: Pointer to VSI structure + */ +static int ice_max_xdp_frame_size(struct ice_vsi *vsi) +{ + if (test_bit(ICE_FLAG_LEGACY_RX, vsi->back->flags)) + return ICE_RXBUF_1664; + else + return ICE_RXBUF_3072; +} + +/** * ice_xdp_setup_prog - Add or remove XDP eBPF program * @vsi: VSI to setup XDP for * @prog: XDP program @@ -2871,13 +2882,16 @@ static int ice_xdp_setup_prog(struct ice_vsi *vsi, struct bpf_prog *prog, struct netlink_ext_ack *extack) { - int frame_size = vsi->netdev->mtu + ICE_ETH_PKT_HDR_PAD; + unsigned 
int frame_size = vsi->netdev->mtu + ICE_ETH_PKT_HDR_PAD; bool if_running = netif_running(vsi->netdev); int ret = 0, xdp_ring_err = 0; - if (frame_size > vsi->rx_buf_len) { - NL_SET_ERR_MSG_MOD(extack, "MTU too large for loading XDP"); - return -EOPNOTSUPP; + if (prog && !prog->aux->xdp_has_frags) { + if (frame_size > ice_max_xdp_frame_size(vsi)) { + NL_SET_ERR_MSG_MOD(extack, + "MTU is too large for linear frames and XDP prog does not support frags"); + return -EOPNOTSUPP; + } } /* need to stop netdev while setting up the program for Rx rings */ @@ -2898,11 +2912,13 @@ ice_xdp_setup_prog(struct ice_vsi *vsi, struct bpf_prog *prog, if (xdp_ring_err) NL_SET_ERR_MSG_MOD(extack, "Setting up XDP Tx resources failed"); } + xdp_features_set_redirect_target(vsi->netdev, false); /* reallocate Rx queues that are used for zero-copy */ xdp_ring_err = ice_realloc_zc_buf(vsi, true); if (xdp_ring_err) NL_SET_ERR_MSG_MOD(extack, "Setting up XDP Rx resources failed"); } else if (ice_is_xdp_ena_vsi(vsi) && !prog) { + xdp_features_clear_redirect_target(vsi->netdev); xdp_ring_err = ice_destroy_xdp_rings(vsi); if (xdp_ring_err) NL_SET_ERR_MSG_MOD(extack, "Freeing XDP Tx resources failed"); @@ -4552,6 +4568,8 @@ static int ice_cfg_netdev(struct ice_vsi *vsi) np->vsi = vsi; ice_set_netdev_features(netdev); + netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_XSK_ZEROCOPY; ice_set_ops(netdev); if (vsi->type == ICE_VSI_PF) { @@ -5724,7 +5742,7 @@ static int __init ice_module_init(void) pr_info("%s\n", ice_driver_string); pr_info("%s\n", ice_copyright); - ice_wq = alloc_workqueue("%s", WQ_MEM_RECLAIM, 0, KBUILD_MODNAME); + ice_wq = alloc_workqueue("%s", 0, 0, KBUILD_MODNAME); if (!ice_wq) { pr_err("Failed to create workqueue\n"); return -ENOMEM; @@ -7514,18 +7532,6 @@ clear_recovery: } /** - * ice_max_xdp_frame_size - returns the maximum allowed frame size for XDP - * @vsi: Pointer to VSI structure - */ -static int ice_max_xdp_frame_size(struct ice_vsi *vsi) -{ - if (PAGE_SIZE >= 8192 || test_bit(ICE_FLAG_LEGACY_RX, vsi->back->flags)) - return ICE_RXBUF_2048 - XDP_PACKET_HEADROOM; - else - return ICE_RXBUF_3072; -} - -/** * ice_change_mtu - NDO callback to change the MTU * @netdev: network interface device structure * @new_mtu: new value for maximum frame size @@ -7537,6 +7543,7 @@ static int ice_change_mtu(struct net_device *netdev, int new_mtu) struct ice_netdev_priv *np = netdev_priv(netdev); struct ice_vsi *vsi = np->vsi; struct ice_pf *pf = vsi->back; + struct bpf_prog *prog; u8 count = 0; int err = 0; @@ -7545,7 +7552,8 @@ static int ice_change_mtu(struct net_device *netdev, int new_mtu) return 0; } - if (ice_is_xdp_ena_vsi(vsi)) { + prog = vsi->xdp_prog; + if (prog && !prog->aux->xdp_has_frags) { int frame_size = ice_max_xdp_frame_size(vsi); if (new_mtu + ICE_ETH_PKT_HDR_PAD > frame_size) { @@ -7553,6 +7561,12 @@ static int ice_change_mtu(struct net_device *netdev, int new_mtu) frame_size - ICE_ETH_PKT_HDR_PAD); return -EINVAL; } + } else if (test_bit(ICE_FLAG_LEGACY_RX, pf->flags)) { + if (new_mtu + ICE_ETH_PKT_HDR_PAD > ICE_MAX_FRAME_LEGACY_RX) { + netdev_err(netdev, "Too big MTU for legacy-rx; Max is %d\n", + ICE_MAX_FRAME_LEGACY_RX - ICE_ETH_PKT_HDR_PAD); + return -EINVAL; + } } /* if a reset is in progress, wait for some time for it to complete */ diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c index 9b762f7972ce..61f844d22512 100644 --- a/drivers/net/ethernet/intel/ice/ice_switch.c +++ 
b/drivers/net/ethernet/intel/ice/ice_switch.c @@ -5420,7 +5420,7 @@ ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, */ status = ice_add_special_words(rinfo, lkup_exts, ice_is_dvm_ena(hw)); if (status) - goto err_free_lkup_exts; + goto err_unroll; /* Group match words into recipes using preferred recipe grouping * criteria. diff --git a/drivers/net/ethernet/intel/ice/ice_tc_lib.c b/drivers/net/ethernet/intel/ice/ice_tc_lib.c index 80706f7330f4..6b48cbc049c6 100644 --- a/drivers/net/ethernet/intel/ice/ice_tc_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_tc_lib.c @@ -1688,7 +1688,7 @@ ice_tc_forward_to_queue(struct ice_vsi *vsi, struct ice_tc_flower_fltr *fltr, struct ice_vsi *ch_vsi = NULL; u16 queue = act->rx_queue; - if (queue > vsi->num_rxq) { + if (queue >= vsi->num_rxq) { NL_SET_ERR_MSG_MOD(fltr->extack, "Unable to add filter because specified queue is invalid"); return -EINVAL; diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c index ccf09c957a1c..466113c86e6f 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx.c +++ b/drivers/net/ethernet/intel/ice/ice_txrx.c @@ -113,12 +113,16 @@ static void ice_unmap_and_free_tx_buf(struct ice_tx_ring *ring, struct ice_tx_buf *tx_buf) { if (tx_buf->skb) { - if (tx_buf->tx_flags & ICE_TX_FLAGS_DUMMY_PKT) + if (tx_buf->tx_flags & ICE_TX_FLAGS_DUMMY_PKT) { devm_kfree(ring->dev, tx_buf->raw_buf); - else if (ice_ring_is_xdp(ring)) - page_frag_free(tx_buf->raw_buf); - else + } else if (ice_ring_is_xdp(ring)) { + if (ring->xsk_pool) + xsk_buff_free(tx_buf->xdp); + else + page_frag_free(tx_buf->raw_buf); + } else { dev_kfree_skb_any(tx_buf->skb); + } if (dma_unmap_len(tx_buf, len)) dma_unmap_single(ring->dev, dma_unmap_addr(tx_buf, dma), @@ -174,8 +178,6 @@ tx_skip_free: tx_ring->next_to_use = 0; tx_ring->next_to_clean = 0; - tx_ring->next_dd = ICE_RING_QUARTER(tx_ring) - 1; - tx_ring->next_rs = ICE_RING_QUARTER(tx_ring) - 1; if (!tx_ring->netdev) return; @@ -382,6 +384,7 @@ err: */ void ice_clean_rx_ring(struct ice_rx_ring *rx_ring) { + struct xdp_buff *xdp = &rx_ring->xdp; struct device *dev = rx_ring->dev; u32 size; u16 i; @@ -390,16 +393,16 @@ void ice_clean_rx_ring(struct ice_rx_ring *rx_ring) if (!rx_ring->rx_buf) return; - if (rx_ring->skb) { - dev_kfree_skb(rx_ring->skb); - rx_ring->skb = NULL; - } - if (rx_ring->xsk_pool) { ice_xsk_clean_rx_ring(rx_ring); goto rx_skip_free; } + if (xdp->data) { + xdp_return_buff(xdp); + xdp->data = NULL; + } + /* Free all the Rx ring sk_buffs */ for (i = 0; i < rx_ring->count; i++) { struct ice_rx_buf *rx_buf = &rx_ring->rx_buf[i]; @@ -437,6 +440,7 @@ rx_skip_free: rx_ring->next_to_alloc = 0; rx_ring->next_to_clean = 0; + rx_ring->first_desc = 0; rx_ring->next_to_use = 0; } @@ -506,6 +510,7 @@ int ice_setup_rx_ring(struct ice_rx_ring *rx_ring) rx_ring->next_to_use = 0; rx_ring->next_to_clean = 0; + rx_ring->first_desc = 0; if (ice_is_xdp_ena_vsi(rx_ring->vsi)) WRITE_ONCE(rx_ring->xdp_prog, rx_ring->vsi->xdp_prog); @@ -523,8 +528,16 @@ err: return -ENOMEM; } +/** + * ice_rx_frame_truesize + * @rx_ring: ptr to Rx ring + * @size: size + * + * calculate the truesize with taking into the account PAGE_SIZE of + * underlying arch + */ static unsigned int -ice_rx_frame_truesize(struct ice_rx_ring *rx_ring, unsigned int __maybe_unused size) +ice_rx_frame_truesize(struct ice_rx_ring *rx_ring, const unsigned int size) { unsigned int truesize; @@ -545,34 +558,39 @@ ice_rx_frame_truesize(struct ice_rx_ring *rx_ring, unsigned int __maybe_unused s * @xdp: 
xdp_buff used as input to the XDP program * @xdp_prog: XDP program to run * @xdp_ring: ring to be used for XDP_TX action + * @rx_buf: Rx buffer to store the XDP action * * The resulting action, any of ICE_XDP_{PASS, CONSUMED, TX, REDIR}, is * stored in @rx_buf->act rather than returned. */ -static int +static void ice_run_xdp(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp, - struct bpf_prog *xdp_prog, struct ice_tx_ring *xdp_ring) + struct bpf_prog *xdp_prog, struct ice_tx_ring *xdp_ring, + struct ice_rx_buf *rx_buf) { - int err; + unsigned int ret = ICE_XDP_PASS; u32 act; + if (!xdp_prog) + goto exit; + act = bpf_prog_run_xdp(xdp_prog, xdp); switch (act) { case XDP_PASS: - return ICE_XDP_PASS; + break; case XDP_TX: if (static_branch_unlikely(&ice_xdp_locking_key)) spin_lock(&xdp_ring->tx_lock); - err = ice_xmit_xdp_ring(xdp->data, xdp->data_end - xdp->data, xdp_ring); + ret = __ice_xmit_xdp_ring(xdp, xdp_ring); if (static_branch_unlikely(&ice_xdp_locking_key)) spin_unlock(&xdp_ring->tx_lock); - if (err == ICE_XDP_CONSUMED) + if (ret == ICE_XDP_CONSUMED) goto out_failure; - return err; + break; case XDP_REDIRECT: - err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog); - if (err) + if (xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog)) goto out_failure; - return ICE_XDP_REDIR; + ret = ICE_XDP_REDIR; + break; default: bpf_warn_invalid_xdp_action(rx_ring->netdev, xdp_prog, act); fallthrough; @@ -581,8 +599,12 @@ out_failure: trace_xdp_exception(rx_ring->netdev, xdp_prog, act); fallthrough; case XDP_DROP: - return ICE_XDP_CONSUMED; + ret = ICE_XDP_CONSUMED; } +exit: + rx_buf->act = ret; + if (unlikely(xdp_buff_has_frags(xdp))) + ice_set_rx_bufs_act(xdp, rx_ring, ret); } /** @@ -605,6 +627,7 @@ ice_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames, unsigned int queue_index = smp_processor_id(); struct ice_vsi *vsi = np->vsi; struct ice_tx_ring *xdp_ring; + struct ice_tx_buf *tx_buf; int nxmit = 0, i; if (test_bit(ICE_VSI_DOWN, vsi->state)) @@ -627,16 +650,18 @@ ice_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames, xdp_ring = vsi->xdp_rings[queue_index]; } + tx_buf = &xdp_ring->tx_buf[xdp_ring->next_to_use]; for (i = 0; i < n; i++) { struct xdp_frame *xdpf = frames[i]; int err; - err = ice_xmit_xdp_ring(xdpf->data, xdpf->len, xdp_ring); + err = ice_xmit_xdp_ring(xdpf, xdp_ring); if (err != ICE_XDP_TX) break; nxmit++; } + tx_buf->rs_idx = ice_set_rs_bit(xdp_ring); if (unlikely(flags & XDP_XMIT_FLUSH)) ice_xdp_ring_update_tail(xdp_ring); @@ -706,7 +731,7 @@ ice_alloc_mapped_page(struct ice_rx_ring *rx_ring, struct ice_rx_buf *bi) * buffers. Then bump tail at most one time. Grouping like this lets us avoid * multiple tail writes per call.
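
ice_run_xdp() above now records the verdict in rx_buf->act (and fans it out to every frag buffer) instead of returning it, so that page-recycling decisions for the whole NAPI batch can be settled in one pass after the descriptor walk, as the cached_ntc loop later in ice_clean_rx_irq() shows. A reduced, self-contained sketch of that record-then-settle idea — types and action values are simplified stand-ins for the driver's:

enum buf_act { ACT_PASS, ACT_CONSUMED, ACT_TX };

struct rbuf {
	enum buf_act act;
	unsigned int pagecnt_bias;
};

/* settle one NAPI batch: walk [first, last) with wraparound and apply
 * the action each Rx stage recorded earlier
 */
static void settle_batch(struct rbuf *bufs, unsigned int first,
			 unsigned int last, unsigned int cnt)
{
	while (first != last) {
		struct rbuf *b = &bufs[first];

		if (b->act == ACT_CONSUMED)
			b->pagecnt_bias++;	/* give the half-page back */
		/* ACT_TX/ACT_PASS: page is in flight, its offset advances */
		if (++first == cnt)
			first = 0;
	}
}
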
*/ -bool ice_alloc_rx_bufs(struct ice_rx_ring *rx_ring, u16 cleaned_count) +bool ice_alloc_rx_bufs(struct ice_rx_ring *rx_ring, unsigned int cleaned_count) { union ice_32b_rx_flex_desc *rx_desc; u16 ntu = rx_ring->next_to_use; @@ -783,7 +808,6 @@ ice_rx_buf_adjust_pg_offset(struct ice_rx_buf *rx_buf, unsigned int size) /** * ice_can_reuse_rx_page - Determine if page can be reused for another Rx * @rx_buf: buffer containing the page - * @rx_buf_pgcnt: rx_buf page refcount pre xdp_do_redirect() call * * If page is reusable, we have a green light for calling ice_reuse_rx_page, * which will assign the current buffer to the buffer that next_to_alloc is @@ -791,7 +815,7 @@ ice_rx_buf_adjust_pg_offset(struct ice_rx_buf *rx_buf, unsigned int size) * page freed */ static bool -ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf, int rx_buf_pgcnt) +ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf) { unsigned int pagecnt_bias = rx_buf->pagecnt_bias; struct page *page = rx_buf->page; @@ -802,7 +826,7 @@ ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf, int rx_buf_pgcnt) #if (PAGE_SIZE < 8192) /* if we are only owner of page we can reuse it */ - if (unlikely((rx_buf_pgcnt - pagecnt_bias) > 1)) + if (unlikely(rx_buf->pgcnt - pagecnt_bias > 1)) return false; #else #define ICE_LAST_OFFSET \ @@ -824,33 +848,44 @@ ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf, int rx_buf_pgcnt) } /** - * ice_add_rx_frag - Add contents of Rx buffer to sk_buff as a frag + * ice_add_xdp_frag - Add contents of Rx buffer to xdp buf as a frag * @rx_ring: Rx descriptor ring to transact packets on + * @xdp: xdp buff to place the data into * @rx_buf: buffer containing page to add - * @skb: sk_buff to place the data into * @size: packet length from rx_desc * - * This function will add the data contained in rx_buf->page to the skb. - * It will just attach the page as a frag to the skb. - * The function will then update the page offset. + * This function will add the data contained in rx_buf->page to the xdp buf. + * It will just attach the page as a frag. 
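
ice_add_xdp_frag(), whose body follows, is the multi-buffer building block: each non-EOP Rx buffer is attached to the xdp_buff's skb_shared_info rather than to an skb. A condensed sketch of the append step, using the same kernel helpers as the hunk, with the error unwinding and pfmemalloc propagation elided:

/* append one Rx page fragment to an xdp_buff */
static int xdp_append_frag(struct xdp_buff *xdp, struct page *page,
			   unsigned int off, unsigned int size)
{
	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);

	if (!xdp_buff_has_frags(xdp)) {
		sinfo->nr_frags = 0;
		sinfo->xdp_frags_size = 0;
		xdp_buff_set_frags_flag(xdp);
	}

	if (sinfo->nr_frags == MAX_SKB_FRAGS)
		return -ENOMEM;	/* caller drops the whole frame */

	__skb_fill_page_desc_noacc(sinfo, sinfo->nr_frags++, page, off, size);
	sinfo->xdp_frags_size += size;
	return 0;
}
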
*/ -static void -ice_add_rx_frag(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf, - struct sk_buff *skb, unsigned int size) +static int +ice_add_xdp_frag(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp, + struct ice_rx_buf *rx_buf, const unsigned int size) { -#if (PAGE_SIZE >= 8192) - unsigned int truesize = SKB_DATA_ALIGN(size + rx_ring->rx_offset); -#else - unsigned int truesize = ice_rx_pg_size(rx_ring) / 2; -#endif + struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp); if (!size) - return; - skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buf->page, - rx_buf->page_offset, size, truesize); + return 0; + + if (!xdp_buff_has_frags(xdp)) { + sinfo->nr_frags = 0; + sinfo->xdp_frags_size = 0; + xdp_buff_set_frags_flag(xdp); + } + + if (unlikely(sinfo->nr_frags == MAX_SKB_FRAGS)) { + if (unlikely(xdp_buff_has_frags(xdp))) + ice_set_rx_bufs_act(xdp, rx_ring, ICE_XDP_CONSUMED); + return -ENOMEM; + } + + __skb_fill_page_desc_noacc(sinfo, sinfo->nr_frags++, rx_buf->page, + rx_buf->page_offset, size); + sinfo->xdp_frags_size += size; - /* page is being used so we must update the page offset */ - ice_rx_buf_adjust_pg_offset(rx_buf, truesize); + if (page_is_pfmemalloc(rx_buf->page)) + xdp_buff_set_frag_pfmemalloc(xdp); + + return 0; } /** @@ -886,19 +921,18 @@ ice_reuse_rx_page(struct ice_rx_ring *rx_ring, struct ice_rx_buf *old_buf) * ice_get_rx_buf - Fetch Rx buffer and synchronize data for use * @rx_ring: Rx descriptor ring to transact packets on * @size: size of buffer to add to skb - * @rx_buf_pgcnt: rx_buf page refcount * * This function will pull an Rx buffer from the ring and synchronize it * for use by the CPU. */ static struct ice_rx_buf * ice_get_rx_buf(struct ice_rx_ring *rx_ring, const unsigned int size, - int *rx_buf_pgcnt) + const unsigned int ntc) { struct ice_rx_buf *rx_buf; - rx_buf = &rx_ring->rx_buf[rx_ring->next_to_clean]; - *rx_buf_pgcnt = + rx_buf = &rx_ring->rx_buf[ntc]; + rx_buf->pgcnt = #if (PAGE_SIZE < 8192) page_count(rx_buf->page); #else @@ -922,26 +956,25 @@ ice_get_rx_buf(struct ice_rx_ring *rx_ring, const unsigned int size, /** * ice_build_skb - Build skb around an existing buffer * @rx_ring: Rx descriptor ring to transact packets on - * @rx_buf: Rx buffer to pull data from * @xdp: xdp_buff pointing to the data * - * This function builds an skb around an existing Rx buffer, taking care - * to set up the skb correctly and avoid any memcpy overhead. + * This function builds an skb around an existing XDP buffer, taking care + * to set up the skb correctly and avoid any memcpy overhead. Driver has + * already combined frags (if any) to skb_shared_info. */ static struct sk_buff * -ice_build_skb(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf, - struct xdp_buff *xdp) +ice_build_skb(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp) { u8 metasize = xdp->data - xdp->data_meta; -#if (PAGE_SIZE < 8192) - unsigned int truesize = ice_rx_pg_size(rx_ring) / 2; -#else - unsigned int truesize = SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) + - SKB_DATA_ALIGN(xdp->data_end - - xdp->data_hard_start); -#endif + struct skb_shared_info *sinfo = NULL; + unsigned int nr_frags; struct sk_buff *skb; + if (unlikely(xdp_buff_has_frags(xdp))) { + sinfo = xdp_get_shared_info_from_buff(xdp); + nr_frags = sinfo->nr_frags; + } + /* Prefetch first cache line of first page. 
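
ice_build_skb() now takes truesize straight from xdp->frame_sz and, because build_skb()-style construction reinitializes the leading part of the shared-info area sitting at the buffer tail, it snapshots the frag count before building the skb. A reduced standalone sketch of that ordering constraint, using the same helpers as the hunk:

static struct sk_buff *sketch_build_skb(struct xdp_buff *xdp)
{
	unsigned int nr_frags = 0, frags_size = 0;
	struct sk_buff *skb;

	/* snapshot first: napi_build_skb() clears part of shinfo */
	if (xdp_buff_has_frags(xdp)) {
		struct skb_shared_info *sinfo =
			xdp_get_shared_info_from_buff(xdp);

		nr_frags = sinfo->nr_frags;
		frags_size = sinfo->xdp_frags_size;
	}

	skb = napi_build_skb(xdp->data_hard_start, xdp->frame_sz);
	if (unlikely(!skb))
		return NULL;

	skb_reserve(skb, xdp->data - xdp->data_hard_start);
	__skb_put(skb, xdp->data_end - xdp->data);

	if (nr_frags)
		xdp_update_skb_shared_info(skb, nr_frags, frags_size,
					   nr_frags * xdp->frame_sz,
					   xdp_buff_is_frag_pfmemalloc(xdp));
	return skb;
}
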
If xdp->data_meta * is unused, this points exactly as xdp->data, otherwise we * likely have a consumer accessing first few bytes of meta @@ -949,7 +982,7 @@ ice_build_skb(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf, */ net_prefetch(xdp->data_meta); /* build an skb around the page buffer */ - skb = napi_build_skb(xdp->data_hard_start, truesize); + skb = napi_build_skb(xdp->data_hard_start, xdp->frame_sz); if (unlikely(!skb)) return NULL; @@ -964,8 +997,11 @@ ice_build_skb(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf, if (metasize) skb_metadata_set(skb, metasize); - /* buffer is used by skb, update page_offset */ - ice_rx_buf_adjust_pg_offset(rx_buf, truesize); + if (unlikely(xdp_buff_has_frags(xdp))) + xdp_update_skb_shared_info(skb, nr_frags, + sinfo->xdp_frags_size, + nr_frags * xdp->frame_sz, + xdp_buff_is_frag_pfmemalloc(xdp)); return skb; } @@ -981,24 +1017,30 @@ ice_build_skb(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf, * skb correctly. */ static struct sk_buff * -ice_construct_skb(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf, - struct xdp_buff *xdp) +ice_construct_skb(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp) { - unsigned int metasize = xdp->data - xdp->data_meta; unsigned int size = xdp->data_end - xdp->data; + struct skb_shared_info *sinfo = NULL; + struct ice_rx_buf *rx_buf; + unsigned int nr_frags = 0; unsigned int headlen; struct sk_buff *skb; /* prefetch first cache line of first page */ - net_prefetch(xdp->data_meta); + net_prefetch(xdp->data); + + if (unlikely(xdp_buff_has_frags(xdp))) { + sinfo = xdp_get_shared_info_from_buff(xdp); + nr_frags = sinfo->nr_frags; + } /* allocate a skb to store the frags */ - skb = __napi_alloc_skb(&rx_ring->q_vector->napi, - ICE_RX_HDR_SIZE + metasize, + skb = __napi_alloc_skb(&rx_ring->q_vector->napi, ICE_RX_HDR_SIZE, GFP_ATOMIC | __GFP_NOWARN); if (unlikely(!skb)) return NULL; + rx_buf = &rx_ring->rx_buf[rx_ring->first_desc]; skb_record_rx_queue(skb, rx_ring->q_index); /* Determine available headroom for copy */ headlen = size; @@ -1006,32 +1048,42 @@ ice_construct_skb(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf, headlen = eth_get_headlen(skb->dev, xdp->data, ICE_RX_HDR_SIZE); /* align pull length to size of long to optimize memcpy performance */ - memcpy(__skb_put(skb, headlen + metasize), xdp->data_meta, - ALIGN(headlen + metasize, sizeof(long))); - - if (metasize) { - skb_metadata_set(skb, metasize); - __skb_pull(skb, metasize); - } + memcpy(__skb_put(skb, headlen), xdp->data, ALIGN(headlen, + sizeof(long))); /* if we exhaust the linear part then add what is left as a frag */ size -= headlen; if (size) { -#if (PAGE_SIZE >= 8192) - unsigned int truesize = SKB_DATA_ALIGN(size); -#else - unsigned int truesize = ice_rx_pg_size(rx_ring) / 2; -#endif + /* besides adding here a partial frag, we are going to add + * frags from xdp_buff, make sure there is enough space for + * them + */ + if (unlikely(nr_frags >= MAX_SKB_FRAGS - 1)) { + dev_kfree_skb(skb); + return NULL; + } skb_add_rx_frag(skb, 0, rx_buf->page, - rx_buf->page_offset + headlen, size, truesize); - /* buffer is used by skb, update page_offset */ - ice_rx_buf_adjust_pg_offset(rx_buf, truesize); + rx_buf->page_offset + headlen, size, + xdp->frame_sz); } else { - /* buffer is unused, reset bias back to rx_buf; data was copied - * onto skb's linear part so there's no need for adjusting - * page offset and we can reuse this buffer as-is + /* buffer is unused, change the act that should be taken later + * on; data was copied 
onto skb's linear part so there's no + * need for adjusting page offset and we can reuse this buffer + * as-is */ - rx_buf->pagecnt_bias++; + rx_buf->act = ICE_SKB_CONSUMED; + } + + if (unlikely(xdp_buff_has_frags(xdp))) { + struct skb_shared_info *skinfo = skb_shinfo(skb); + + memcpy(&skinfo->frags[skinfo->nr_frags], &sinfo->frags[0], + sizeof(skb_frag_t) * nr_frags); + + xdp_update_skb_shared_info(skb, skinfo->nr_frags + nr_frags, + sinfo->xdp_frags_size, + nr_frags * xdp->frame_sz, + xdp_buff_is_frag_pfmemalloc(xdp)); } return skb; @@ -1041,26 +1093,17 @@ ice_construct_skb(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf, * ice_put_rx_buf - Clean up used buffer and either recycle or free * @rx_ring: Rx descriptor ring to transact packets on * @rx_buf: Rx buffer to pull data from - * @rx_buf_pgcnt: Rx buffer page count pre xdp_do_redirect() * - * This function will update next_to_clean and then clean up the contents - * of the rx_buf. It will either recycle the buffer or unmap it and free - * the associated resources. + * This function will clean up the contents of the rx_buf. It will either + * recycle the buffer or unmap it and free the associated resources. */ static void -ice_put_rx_buf(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf, - int rx_buf_pgcnt) +ice_put_rx_buf(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf) { - u16 ntc = rx_ring->next_to_clean + 1; - - /* fetch, update, and store next to clean */ - ntc = (ntc < rx_ring->count) ? ntc : 0; - rx_ring->next_to_clean = ntc; - if (!rx_buf) return; - if (ice_can_reuse_rx_page(rx_buf, rx_buf_pgcnt)) { + if (ice_can_reuse_rx_page(rx_buf)) { /* hand second half of page back to the ring */ ice_reuse_rx_page(rx_ring, rx_buf); } else { @@ -1076,27 +1119,6 @@ ice_put_rx_buf(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf, } /** - * ice_is_non_eop - process handling of non-EOP buffers - * @rx_ring: Rx ring being processed - * @rx_desc: Rx descriptor for current buffer - * - * If the buffer is an EOP buffer, this function exits returning false, - * otherwise return true indicating that this is in fact a non-EOP buffer. 
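
ice_is_non_eop() is only moving: it reappears as a static inline in ice_txrx_lib.h later in this patch. The rewritten ice_clean_rx_irq() below also stops threading next_to_clean through ice_put_rx_buf() and instead walks a local cursor with explicit wraparound, publishing it once per poll. The cursor discipline in isolation (plain C, names illustrative):

/* walk a descriptor ring with a local cursor; publish once per poll */
static unsigned int poll_ring(unsigned int ntc, unsigned int cnt,
			      unsigned int budget)
{
	while (budget--) {
		/* ... process descriptor/buffer at index ntc ... */
		if (++ntc == cnt)
			ntc = 0;
	}
	return ntc;	/* caller: ring->next_to_clean = poll_ring(...); */
}
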
- */ -static bool -ice_is_non_eop(struct ice_rx_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc) -{ - /* if we are the last buffer then there is nothing else to do */ -#define ICE_RXD_EOF BIT(ICE_RX_FLEX_DESC_STATUS0_EOF_S) - if (likely(ice_test_staterr(rx_desc->wb.status_error0, ICE_RXD_EOF))) - return false; - - rx_ring->ring_stats->rx_stats.non_eop_descs++; - - return true; -} - -/** * ice_clean_rx_irq - Clean completed descriptors from Rx ring - bounce buf * @rx_ring: Rx descriptor ring to transact packets on * @budget: Total limit on number of packets to process @@ -1110,39 +1132,42 @@ ice_is_non_eop(struct ice_rx_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc) */ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget) { - unsigned int total_rx_bytes = 0, total_rx_pkts = 0, frame_sz = 0; - u16 cleaned_count = ICE_DESC_UNUSED(rx_ring); + unsigned int total_rx_bytes = 0, total_rx_pkts = 0; unsigned int offset = rx_ring->rx_offset; + struct xdp_buff *xdp = &rx_ring->xdp; struct ice_tx_ring *xdp_ring = NULL; - unsigned int xdp_res, xdp_xmit = 0; - struct sk_buff *skb = rx_ring->skb; struct bpf_prog *xdp_prog = NULL; - struct xdp_buff xdp; + u32 ntc = rx_ring->next_to_clean; + u32 cnt = rx_ring->count; + u32 cached_ntc = ntc; + u32 xdp_xmit = 0; + u32 cached_ntu; bool failure; + u32 first; /* Frame size depend on rx_ring setup when PAGE_SIZE=4K */ #if (PAGE_SIZE < 8192) - frame_sz = ice_rx_frame_truesize(rx_ring, 0); + xdp->frame_sz = ice_rx_frame_truesize(rx_ring, 0); #endif - xdp_init_buff(&xdp, frame_sz, &rx_ring->xdp_rxq); xdp_prog = READ_ONCE(rx_ring->xdp_prog); - if (xdp_prog) + if (xdp_prog) { xdp_ring = rx_ring->xdp_ring; + cached_ntu = xdp_ring->next_to_use; + } /* start the loop to process Rx packets bounded by 'budget' */ while (likely(total_rx_pkts < (unsigned int)budget)) { union ice_32b_rx_flex_desc *rx_desc; struct ice_rx_buf *rx_buf; - unsigned char *hard_start; + struct sk_buff *skb; unsigned int size; u16 stat_err_bits; - int rx_buf_pgcnt; u16 vlan_tag = 0; u16 rx_ptype; /* get the Rx desc from Rx ring based on 'next_to_clean' */ - rx_desc = ICE_RX_DESC(rx_ring, rx_ring->next_to_clean); + rx_desc = ICE_RX_DESC(rx_ring, ntc); /* status_error_len will always be zero for unused descriptors * because it's cleared in cleanup, and overlaps with hdr_addr @@ -1166,8 +1191,8 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget) if (rx_desc->wb.rxdid == FDIR_DESC_RXDID && ctrl_vsi->vf) ice_vc_fdir_irq_handler(ctrl_vsi, rx_desc); - ice_put_rx_buf(rx_ring, NULL, 0); - cleaned_count++; + if (++ntc == cnt) + ntc = 0; continue; } @@ -1175,65 +1200,56 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget) ICE_RX_FLX_DESC_PKT_LEN_M; /* retrieve a buffer from the ring */ - rx_buf = ice_get_rx_buf(rx_ring, size, &rx_buf_pgcnt); + rx_buf = ice_get_rx_buf(rx_ring, size, ntc); - if (!size) { - xdp.data = NULL; - xdp.data_end = NULL; - xdp.data_hard_start = NULL; - xdp.data_meta = NULL; - goto construct_skb; - } + if (!xdp->data) { + void *hard_start; - hard_start = page_address(rx_buf->page) + rx_buf->page_offset - - offset; - xdp_prepare_buff(&xdp, hard_start, offset, size, true); + hard_start = page_address(rx_buf->page) + rx_buf->page_offset - + offset; + xdp_prepare_buff(xdp, hard_start, offset, size, !!offset); #if (PAGE_SIZE > 4096) - /* At larger PAGE_SIZE, frame_sz depend on len size */ - xdp.frame_sz = ice_rx_frame_truesize(rx_ring, size); + /* At larger PAGE_SIZE, frame_sz depend on len size */ + xdp->frame_sz = ice_rx_frame_truesize(rx_ring, size); 
#endif + xdp_buff_clear_frags_flag(xdp); + } else if (ice_add_xdp_frag(rx_ring, xdp, rx_buf, size)) { + break; + } + if (++ntc == cnt) + ntc = 0; - if (!xdp_prog) - goto construct_skb; + /* skip if it is NOP desc */ + if (ice_is_non_eop(rx_ring, rx_desc)) + continue; - xdp_res = ice_run_xdp(rx_ring, &xdp, xdp_prog, xdp_ring); - if (!xdp_res) + ice_run_xdp(rx_ring, xdp, xdp_prog, xdp_ring, rx_buf); + if (rx_buf->act == ICE_XDP_PASS) goto construct_skb; - if (xdp_res & (ICE_XDP_TX | ICE_XDP_REDIR)) { - xdp_xmit |= xdp_res; - ice_rx_buf_adjust_pg_offset(rx_buf, xdp.frame_sz); - } else { - rx_buf->pagecnt_bias++; - } - total_rx_bytes += size; + total_rx_bytes += xdp_get_buff_len(xdp); total_rx_pkts++; - cleaned_count++; - ice_put_rx_buf(rx_ring, rx_buf, rx_buf_pgcnt); + xdp->data = NULL; + rx_ring->first_desc = ntc; continue; construct_skb: - if (skb) { - ice_add_rx_frag(rx_ring, rx_buf, skb, size); - } else if (likely(xdp.data)) { - if (ice_ring_uses_build_skb(rx_ring)) - skb = ice_build_skb(rx_ring, rx_buf, &xdp); - else - skb = ice_construct_skb(rx_ring, rx_buf, &xdp); - } + if (likely(ice_ring_uses_build_skb(rx_ring))) + skb = ice_build_skb(rx_ring, xdp); + else + skb = ice_construct_skb(rx_ring, xdp); /* exit if we failed to retrieve a buffer */ if (!skb) { - rx_ring->ring_stats->rx_stats.alloc_buf_failed++; - if (rx_buf) - rx_buf->pagecnt_bias++; + rx_ring->ring_stats->rx_stats.alloc_page_failed++; + rx_buf->act = ICE_XDP_CONSUMED; + if (unlikely(xdp_buff_has_frags(xdp))) + ice_set_rx_bufs_act(xdp, rx_ring, + ICE_XDP_CONSUMED); + xdp->data = NULL; + rx_ring->first_desc = ntc; break; } - - ice_put_rx_buf(rx_ring, rx_buf, rx_buf_pgcnt); - cleaned_count++; - - /* skip if it is NOP desc */ - if (ice_is_non_eop(rx_ring, rx_desc)) - continue; + xdp->data = NULL; + rx_ring->first_desc = ntc; stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_RXE_S); if (unlikely(ice_test_staterr(rx_desc->wb.status_error0, @@ -1245,10 +1261,8 @@ construct_skb: vlan_tag = ice_get_vlan_tag_from_rx_desc(rx_desc); /* pad the skb if needed, to make a valid ethernet frame */ - if (eth_skb_pad(skb)) { - skb = NULL; + if (eth_skb_pad(skb)) continue; - } /* probably a little skewed due to removing CRC */ total_rx_bytes += skb->len; @@ -1262,18 +1276,34 @@ construct_skb: ice_trace(clean_rx_irq_indicate, rx_ring, rx_desc, skb); /* send completed skb up the stack */ ice_receive_skb(rx_ring, skb, vlan_tag); - skb = NULL; /* update budget accounting */ total_rx_pkts++; } + first = rx_ring->first_desc; + while (cached_ntc != first) { + struct ice_rx_buf *buf = &rx_ring->rx_buf[cached_ntc]; + + if (buf->act & (ICE_XDP_TX | ICE_XDP_REDIR)) { + ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz); + xdp_xmit |= buf->act; + } else if (buf->act & ICE_XDP_CONSUMED) { + buf->pagecnt_bias++; + } else if (buf->act == ICE_XDP_PASS) { + ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz); + } + + ice_put_rx_buf(rx_ring, buf); + if (++cached_ntc >= cnt) + cached_ntc = 0; + } + rx_ring->next_to_clean = ntc; /* return up to cleaned_count buffers to hardware */ - failure = ice_alloc_rx_bufs(rx_ring, cleaned_count); + failure = ice_alloc_rx_bufs(rx_ring, ICE_RX_DESC_UNUSED(rx_ring)); - if (xdp_prog) - ice_finalize_xdp_rx(xdp_ring, xdp_xmit); - rx_ring->skb = skb; + if (xdp_xmit) + ice_finalize_xdp_rx(xdp_ring, xdp_xmit, cached_ntu); if (rx_ring->ring_stats) ice_update_rx_ring_stats(rx_ring, total_rx_pkts, diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h index 4fd0e5d0a313..efa3d378f19e 100644 --- 
a/drivers/net/ethernet/intel/ice/ice_txrx.h +++ b/drivers/net/ethernet/intel/ice/ice_txrx.h @@ -9,10 +9,12 @@ #define ICE_DFLT_IRQ_WORK 256 #define ICE_RXBUF_3072 3072 #define ICE_RXBUF_2048 2048 +#define ICE_RXBUF_1664 1664 #define ICE_RXBUF_1536 1536 #define ICE_MAX_CHAINED_RX_BUFS 5 #define ICE_MAX_BUF_TXD 8 #define ICE_MIN_TX_LEN 17 +#define ICE_MAX_FRAME_LEGACY_RX 8320 /* The size limit for a transmit buffer in a descriptor is (16K - 1). * In order to align with the read requests we will align the value to @@ -110,6 +112,10 @@ static inline int ice_skb_pad(void) (u16)((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->count) + \ (R)->next_to_clean - (R)->next_to_use - 1) +#define ICE_RX_DESC_UNUSED(R) \ + ((((R)->first_desc > (R)->next_to_use) ? 0 : (R)->count) + \ + (R)->first_desc - (R)->next_to_use - 1) + #define ICE_RING_QUARTER(R) ((R)->count >> 2) #define ICE_TX_FLAGS_TSO BIT(0) @@ -134,6 +140,7 @@ static inline int ice_skb_pad(void) #define ICE_XDP_TX BIT(1) #define ICE_XDP_REDIR BIT(2) #define ICE_XDP_EXIT BIT(3) +#define ICE_SKB_CONSUMED ICE_XDP_CONSUMED #define ICE_RX_DMA_ATTR \ (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING) @@ -143,13 +150,20 @@ static inline int ice_skb_pad(void) #define ICE_TXD_LAST_DESC_CMD (ICE_TX_DESC_CMD_EOP | ICE_TX_DESC_CMD_RS) struct ice_tx_buf { - struct ice_tx_desc *next_to_watch; + union { + struct ice_tx_desc *next_to_watch; + u32 rs_idx; + }; union { struct sk_buff *skb; void *raw_buf; /* used for XDP */ + struct xdp_buff *xdp; /* used for XDP_TX ZC */ }; unsigned int bytecount; - unsigned short gso_segs; + union { + unsigned int gso_segs; + unsigned int nr_frags; /* used for mbuf XDP */ + }; u32 tx_flags; DEFINE_DMA_UNMAP_LEN(len); DEFINE_DMA_UNMAP_ADDR(dma); @@ -170,7 +184,9 @@ struct ice_rx_buf { dma_addr_t dma; struct page *page; unsigned int page_offset; - u16 pagecnt_bias; + unsigned int pgcnt; + unsigned int act; + unsigned int pagecnt_bias; }; struct ice_q_stats { @@ -273,42 +289,44 @@ struct ice_rx_ring { struct ice_vsi *vsi; /* Backreference to associated VSI */ struct ice_q_vector *q_vector; /* Backreference to associated vector */ u8 __iomem *tail; + u16 q_index; /* Queue number of ring */ + + u16 count; /* Number of descriptors */ + u16 reg_idx; /* HW register index of the ring */ + u16 next_to_alloc; + /* CL2 - 2nd cacheline starts here */ union { struct ice_rx_buf *rx_buf; struct xdp_buff **xdp_buf; }; - /* CL2 - 2nd cacheline starts here */ - struct xdp_rxq_info xdp_rxq; + struct xdp_buff xdp; /* CL3 - 3rd cacheline starts here */ - u16 q_index; /* Queue number of ring */ - - u16 count; /* Number of descriptors */ - u16 reg_idx; /* HW register index of the ring */ + struct bpf_prog *xdp_prog; + u16 rx_offset; /* used in interrupt processing */ u16 next_to_use; u16 next_to_clean; - u16 next_to_alloc; - u16 rx_offset; - u16 rx_buf_len; + u16 first_desc; /* stats structs */ struct ice_ring_stats *ring_stats; struct rcu_head rcu; /* to avoid race on free */ - /* CL4 - 3rd cacheline starts here */ + /* CL4 - 4th cacheline starts here */ struct ice_channel *ch; - struct bpf_prog *xdp_prog; struct ice_tx_ring *xdp_ring; struct xsk_buff_pool *xsk_pool; - struct sk_buff *skb; dma_addr_t dma; /* physical address of ring */ u64 cached_phctime; + u16 rx_buf_len; u8 dcb_tc; /* Traffic class of ring */ u8 ptp_rx; #define ICE_RX_FLAGS_RING_BUILD_SKB BIT(1) #define ICE_RX_FLAGS_CRC_STRIP_DIS BIT(2) u8 flags; + /* CL5 - 5th cacheline starts here */ + struct xdp_rxq_info xdp_rxq; } ____cacheline_internodealigned_in_smp; struct ice_tx_ring { 
@@ -326,12 +344,11 @@ struct ice_tx_ring { struct xsk_buff_pool *xsk_pool; u16 next_to_use; u16 next_to_clean; - u16 next_rs; - u16 next_dd; u16 q_handle; /* Queue handle per TC */ u16 reg_idx; /* HW register index of the ring */ u16 count; /* Number of descriptors */ u16 q_index; /* Queue number of ring */ + u16 xdp_tx_active; /* stats structs */ struct ice_ring_stats *ring_stats; /* CL3 - 3rd cacheline starts here */ @@ -342,7 +359,6 @@ struct ice_tx_ring { spinlock_t tx_lock; u32 txq_teid; /* Added Tx queue TEID */ /* CL4 - 4th cacheline starts here */ - u16 xdp_tx_active; #define ICE_TX_FLAGS_RING_XDP BIT(0) #define ICE_TX_FLAGS_RING_VLAN_L2TAG1 BIT(1) #define ICE_TX_FLAGS_RING_VLAN_L2TAG2 BIT(2) @@ -431,7 +447,7 @@ static inline unsigned int ice_rx_pg_order(struct ice_rx_ring *ring) union ice_32b_rx_flex_desc; -bool ice_alloc_rx_bufs(struct ice_rx_ring *rxr, u16 cleaned_count); +bool ice_alloc_rx_bufs(struct ice_rx_ring *rxr, unsigned int cleaned_count); netdev_tx_t ice_start_xmit(struct sk_buff *skb, struct net_device *netdev); u16 ice_select_queue(struct net_device *dev, struct sk_buff *skb, diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c index 25f04266c668..9bbed3f14e42 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c +++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c @@ -221,128 +221,193 @@ ice_receive_skb(struct ice_rx_ring *rx_ring, struct sk_buff *skb, u16 vlan_tag) } /** + * ice_clean_xdp_tx_buf - Free and unmap XDP Tx buffer + * @xdp_ring: XDP Tx ring + * @tx_buf: Tx buffer to clean + */ +static void +ice_clean_xdp_tx_buf(struct ice_tx_ring *xdp_ring, struct ice_tx_buf *tx_buf) +{ + dma_unmap_single(xdp_ring->dev, dma_unmap_addr(tx_buf, dma), + dma_unmap_len(tx_buf, len), DMA_TO_DEVICE); + dma_unmap_len_set(tx_buf, len, 0); + xdp_ring->xdp_tx_active--; + page_frag_free(tx_buf->raw_buf); + tx_buf->raw_buf = NULL; +} + +/** * ice_clean_xdp_irq - Reclaim resources after transmit completes on XDP ring * @xdp_ring: XDP ring to clean */ -static void ice_clean_xdp_irq(struct ice_tx_ring *xdp_ring) +static u32 ice_clean_xdp_irq(struct ice_tx_ring *xdp_ring) { - unsigned int total_bytes = 0, total_pkts = 0; - u16 tx_thresh = ICE_RING_QUARTER(xdp_ring); - u16 ntc = xdp_ring->next_to_clean; - struct ice_tx_desc *next_dd_desc; - u16 next_dd = xdp_ring->next_dd; - struct ice_tx_buf *tx_buf; - int i; + int total_bytes = 0, total_pkts = 0; + u32 ntc = xdp_ring->next_to_clean; + struct ice_tx_desc *tx_desc; + u32 cnt = xdp_ring->count; + u32 ready_frames = 0; + u32 frags; + u32 idx; + u32 ret; + + idx = xdp_ring->tx_buf[ntc].rs_idx; + tx_desc = ICE_TX_DESC(xdp_ring, idx); + if (tx_desc->cmd_type_offset_bsz & + cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE)) { + if (idx >= ntc) + ready_frames = idx - ntc + 1; + else + ready_frames = idx + cnt - ntc + 1; + } - next_dd_desc = ICE_TX_DESC(xdp_ring, next_dd); - if (!(next_dd_desc->cmd_type_offset_bsz & - cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE))) - return; + if (!ready_frames) + return 0; + ret = ready_frames; - for (i = 0; i < tx_thresh; i++) { - tx_buf = &xdp_ring->tx_buf[ntc]; + while (ready_frames) { + struct ice_tx_buf *tx_buf = &xdp_ring->tx_buf[ntc]; + /* bytecount holds size of head + frags */ total_bytes += tx_buf->bytecount; - /* normally tx_buf->gso_segs was taken but at this point - * it's always 1 for us - */ + frags = tx_buf->nr_frags; total_pkts++; + /* count head + frags */ + ready_frames -= frags + 1; - page_frag_free(tx_buf->raw_buf); - 
dma_unmap_single(xdp_ring->dev, dma_unmap_addr(tx_buf, dma), - dma_unmap_len(tx_buf, len), DMA_TO_DEVICE); - dma_unmap_len_set(tx_buf, len, 0); - tx_buf->raw_buf = NULL; - + if (xdp_ring->xsk_pool) + xsk_buff_free(tx_buf->xdp); + else + ice_clean_xdp_tx_buf(xdp_ring, tx_buf); ntc++; - if (ntc >= xdp_ring->count) + if (ntc == cnt) ntc = 0; + + for (int i = 0; i < frags; i++) { + tx_buf = &xdp_ring->tx_buf[ntc]; + + ice_clean_xdp_tx_buf(xdp_ring, tx_buf); + ntc++; + if (ntc == cnt) + ntc = 0; + } } - next_dd_desc->cmd_type_offset_bsz = 0; - xdp_ring->next_dd = xdp_ring->next_dd + tx_thresh; - if (xdp_ring->next_dd > xdp_ring->count) - xdp_ring->next_dd = tx_thresh - 1; + tx_desc->cmd_type_offset_bsz = 0; xdp_ring->next_to_clean = ntc; ice_update_tx_ring_stats(xdp_ring, total_pkts, total_bytes); + + return ret; } /** - * ice_xmit_xdp_ring - submit single packet to XDP ring for transmission - * @data: packet data pointer - * @size: packet data size + * __ice_xmit_xdp_ring - submit frame to XDP ring for transmission + * @xdp: XDP buffer to be placed onto Tx descriptors * @xdp_ring: XDP ring for transmission */ -int ice_xmit_xdp_ring(void *data, u16 size, struct ice_tx_ring *xdp_ring) +int __ice_xmit_xdp_ring(struct xdp_buff *xdp, struct ice_tx_ring *xdp_ring) { - u16 tx_thresh = ICE_RING_QUARTER(xdp_ring); - u16 i = xdp_ring->next_to_use; + struct skb_shared_info *sinfo = NULL; + u32 size = xdp->data_end - xdp->data; + struct device *dev = xdp_ring->dev; + u32 ntu = xdp_ring->next_to_use; struct ice_tx_desc *tx_desc; + struct ice_tx_buf *tx_head; struct ice_tx_buf *tx_buf; - dma_addr_t dma; + u32 cnt = xdp_ring->count; + void *data = xdp->data; + u32 nr_frags = 0; + u32 free_space; + u32 frag = 0; + + free_space = ICE_DESC_UNUSED(xdp_ring); + + if (ICE_DESC_UNUSED(xdp_ring) < ICE_RING_QUARTER(xdp_ring)) + free_space += ice_clean_xdp_irq(xdp_ring); + + if (unlikely(xdp_buff_has_frags(xdp))) { + sinfo = xdp_get_shared_info_from_buff(xdp); + nr_frags = sinfo->nr_frags; + if (free_space < nr_frags + 1) { + xdp_ring->ring_stats->tx_stats.tx_busy++; + return ICE_XDP_CONSUMED; + } + } - if (ICE_DESC_UNUSED(xdp_ring) < tx_thresh) - ice_clean_xdp_irq(xdp_ring); + tx_desc = ICE_TX_DESC(xdp_ring, ntu); + tx_head = &xdp_ring->tx_buf[ntu]; + tx_buf = tx_head; - if (!unlikely(ICE_DESC_UNUSED(xdp_ring))) { - xdp_ring->ring_stats->tx_stats.tx_busy++; - return ICE_XDP_CONSUMED; - } + for (;;) { + dma_addr_t dma; - dma = dma_map_single(xdp_ring->dev, data, size, DMA_TO_DEVICE); - if (dma_mapping_error(xdp_ring->dev, dma)) - return ICE_XDP_CONSUMED; + dma = dma_map_single(dev, data, size, DMA_TO_DEVICE); + if (dma_mapping_error(dev, dma)) + goto dma_unmap; - tx_buf = &xdp_ring->tx_buf[i]; - tx_buf->bytecount = size; - tx_buf->gso_segs = 1; - tx_buf->raw_buf = data; + /* record length, and DMA address */ + dma_unmap_len_set(tx_buf, len, size); + dma_unmap_addr_set(tx_buf, dma, dma); - /* record length, and DMA address */ - dma_unmap_len_set(tx_buf, len, size); - dma_unmap_addr_set(tx_buf, dma, dma); + tx_desc->buf_addr = cpu_to_le64(dma); + tx_desc->cmd_type_offset_bsz = ice_build_ctob(0, 0, size, 0); + tx_buf->raw_buf = data; - tx_desc = ICE_TX_DESC(xdp_ring, i); - tx_desc->buf_addr = cpu_to_le64(dma); - tx_desc->cmd_type_offset_bsz = ice_build_ctob(ICE_TX_DESC_CMD_EOP, 0, - size, 0); + ntu++; + if (ntu == cnt) + ntu = 0; - xdp_ring->xdp_tx_active++; - i++; - if (i == xdp_ring->count) { - i = 0; - tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_rs); - tx_desc->cmd_type_offset_bsz |= - 
cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S); - xdp_ring->next_rs = tx_thresh - 1; - } - xdp_ring->next_to_use = i; + if (frag == nr_frags) + break; - if (i > xdp_ring->next_rs) { - tx_desc = ICE_TX_DESC(xdp_ring, xdp_ring->next_rs); - tx_desc->cmd_type_offset_bsz |= - cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S); - xdp_ring->next_rs += tx_thresh; + tx_desc = ICE_TX_DESC(xdp_ring, ntu); + tx_buf = &xdp_ring->tx_buf[ntu]; + + data = skb_frag_address(&sinfo->frags[frag]); + size = skb_frag_size(&sinfo->frags[frag]); + frag++; } + /* store info about bytecount and frag count in first desc */ + tx_head->bytecount = xdp_get_buff_len(xdp); + tx_head->nr_frags = nr_frags; + + /* update last descriptor from a frame with EOP */ + tx_desc->cmd_type_offset_bsz |= + cpu_to_le64(ICE_TX_DESC_CMD_EOP << ICE_TXD_QW1_CMD_S); + + xdp_ring->xdp_tx_active++; + xdp_ring->next_to_use = ntu; + return ICE_XDP_TX; + +dma_unmap: + for (;;) { + tx_buf = &xdp_ring->tx_buf[ntu]; + dma_unmap_page(dev, dma_unmap_addr(tx_buf, dma), + dma_unmap_len(tx_buf, len), DMA_TO_DEVICE); + dma_unmap_len_set(tx_buf, len, 0); + if (tx_buf == tx_head) + break; + + if (!ntu) + ntu += cnt; + ntu--; + } + return ICE_XDP_CONSUMED; } /** - * ice_xmit_xdp_buff - convert an XDP buffer to an XDP frame and send it - * @xdp: XDP buffer - * @xdp_ring: XDP Tx ring - * - * Returns negative on failure, 0 on success. + * ice_xmit_xdp_ring - submit frame to XDP ring for transmission + * @xdpf: XDP frame that will be converted to XDP buff + * @xdp_ring: XDP ring for transmission */ -int ice_xmit_xdp_buff(struct xdp_buff *xdp, struct ice_tx_ring *xdp_ring) +int ice_xmit_xdp_ring(struct xdp_frame *xdpf, struct ice_tx_ring *xdp_ring) { - struct xdp_frame *xdpf = xdp_convert_buff_to_frame(xdp); - - if (unlikely(!xdpf)) - return ICE_XDP_CONSUMED; + struct xdp_buff xdp; - return ice_xmit_xdp_ring(xdpf->data, xdpf->len, xdp_ring); + xdp_convert_frame_to_buff(xdpf, &xdp); + return __ice_xmit_xdp_ring(&xdp, xdp_ring); } /** @@ -354,14 +419,21 @@ int ice_xmit_xdp_buff(struct xdp_buff *xdp, struct ice_tx_ring *xdp_ring) * should be called when a batch of packets has been processed in the * napi loop. */ -void ice_finalize_xdp_rx(struct ice_tx_ring *xdp_ring, unsigned int xdp_res) +void ice_finalize_xdp_rx(struct ice_tx_ring *xdp_ring, unsigned int xdp_res, + u32 first_idx) { + struct ice_tx_buf *tx_buf = &xdp_ring->tx_buf[first_idx]; + if (xdp_res & ICE_XDP_REDIR) xdp_do_flush_map(); if (xdp_res & ICE_XDP_TX) { if (static_branch_unlikely(&ice_xdp_locking_key)) spin_lock(&xdp_ring->tx_lock); + /* store index of descriptor with RS bit set in the first + * ice_tx_buf of given NAPI batch + */ + tx_buf->rs_idx = ice_set_rs_bit(xdp_ring); ice_xdp_ring_update_tail(xdp_ring); if (static_branch_unlikely(&ice_xdp_locking_key)) spin_unlock(&xdp_ring->tx_lock); diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.h b/drivers/net/ethernet/intel/ice/ice_txrx_lib.h index c7d2954dc9ea..ea977f283c22 100644 --- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.h +++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.h @@ -6,6 +6,36 @@ #include "ice.h" /** + * ice_set_rx_bufs_act - propagate Rx buffer action to frags + * @xdp: XDP buffer representing frame (linear and frags part) + * @rx_ring: Rx ring struct + * @act: action to store onto Rx buffers related to XDP buffer parts + * + * Set action that should be taken before putting Rx buffer from first frag + * to one before last.
Last one is handled by caller of this function as it + * is the EOP frag that is currently being processed. This function is + * supposed to be called only when XDP buffer contains frags. + */ +static inline void +ice_set_rx_bufs_act(struct xdp_buff *xdp, const struct ice_rx_ring *rx_ring, + const unsigned int act) +{ + const struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp); + u32 first = rx_ring->first_desc; + u32 nr_frags = sinfo->nr_frags; + u32 cnt = rx_ring->count; + struct ice_rx_buf *buf; + + for (int i = 0; i < nr_frags; i++) { + buf = &rx_ring->rx_buf[first]; + buf->act = act; + + if (++first == cnt) + first = 0; + } +} + +/** * ice_test_staterr - tests bits in Rx descriptor status and error fields * @status_err_n: Rx descriptor status_error0 or status_error1 bits * @stat_err_bits: value to mask @@ -21,6 +51,28 @@ ice_test_staterr(__le16 status_err_n, const u16 stat_err_bits) return !!(status_err_n & cpu_to_le16(stat_err_bits)); } +/** + * ice_is_non_eop - process handling of non-EOP buffers + * @rx_ring: Rx ring being processed + * @rx_desc: Rx descriptor for current buffer + * + * If the buffer is an EOP buffer, this function exits returning false, + * otherwise return true indicating that this is in fact a non-EOP buffer. + */ +static inline bool +ice_is_non_eop(const struct ice_rx_ring *rx_ring, + const union ice_32b_rx_flex_desc *rx_desc) +{ + /* if we are the last buffer then there is nothing else to do */ +#define ICE_RXD_EOF BIT(ICE_RX_FLEX_DESC_STATUS0_EOF_S) + if (likely(ice_test_staterr(rx_desc->wb.status_error0, ICE_RXD_EOF))) + return false; + + rx_ring->ring_stats->rx_stats.non_eop_descs++; + + return true; +} + static inline __le64 ice_build_ctob(u64 td_cmd, u64 td_offset, unsigned int size, u64 td_tag) { @@ -70,9 +122,28 @@ static inline void ice_xdp_ring_update_tail(struct ice_tx_ring *xdp_ring) writel_relaxed(xdp_ring->next_to_use, xdp_ring->tail); } -void ice_finalize_xdp_rx(struct ice_tx_ring *xdp_ring, unsigned int xdp_res); +/** + * ice_set_rs_bit - set RS bit on last produced descriptor (one behind current NTU) + * @xdp_ring: XDP ring to produce the HW Tx descriptors on + * + * returns index of descriptor that had RS bit produced on + */ +static inline u32 ice_set_rs_bit(const struct ice_tx_ring *xdp_ring) +{ + u32 rs_idx = xdp_ring->next_to_use ? 
xdp_ring->next_to_use - 1 : xdp_ring->count - 1; + struct ice_tx_desc *tx_desc; + + tx_desc = ICE_TX_DESC(xdp_ring, rs_idx); + tx_desc->cmd_type_offset_bsz |= + cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S); + + return rs_idx; +} + +void ice_finalize_xdp_rx(struct ice_tx_ring *xdp_ring, unsigned int xdp_res, u32 first_idx); int ice_xmit_xdp_buff(struct xdp_buff *xdp, struct ice_tx_ring *xdp_ring); -int ice_xmit_xdp_ring(void *data, u16 size, struct ice_tx_ring *xdp_ring); +int ice_xmit_xdp_ring(struct xdp_frame *xdpf, struct ice_tx_ring *xdp_ring); +int __ice_xmit_xdp_ring(struct xdp_buff *xdp, struct ice_tx_ring *xdp_ring); void ice_release_rx_desc(struct ice_rx_ring *rx_ring, u16 val); void ice_process_skb_fields(struct ice_rx_ring *rx_ring, diff --git a/drivers/net/ethernet/intel/ice/ice_vf_mbx.c b/drivers/net/ethernet/intel/ice/ice_vf_mbx.c index d4a4001b6e5d..f56fa94ff3d0 100644 --- a/drivers/net/ethernet/intel/ice/ice_vf_mbx.c +++ b/drivers/net/ethernet/intel/ice/ice_vf_mbx.c @@ -39,7 +39,7 @@ ice_aq_send_msg_to_vf(struct ice_hw *hw, u16 vfid, u32 v_opcode, u32 v_retval, return ice_sq_send_cmd(hw, &hw->mailboxq, &desc, msg, msglen, cd); } -static const u32 ice_legacy_aq_to_vc_speed[15] = { +static const u32 ice_legacy_aq_to_vc_speed[] = { VIRTCHNL_LINK_SPEED_100MB, /* BIT(0) */ VIRTCHNL_LINK_SPEED_100MB, VIRTCHNL_LINK_SPEED_1GB, @@ -51,10 +51,6 @@ static const u32 ice_legacy_aq_to_vc_speed[15] = { VIRTCHNL_LINK_SPEED_40GB, VIRTCHNL_LINK_SPEED_40GB, VIRTCHNL_LINK_SPEED_40GB, - VIRTCHNL_LINK_SPEED_UNKNOWN, - VIRTCHNL_LINK_SPEED_UNKNOWN, - VIRTCHNL_LINK_SPEED_UNKNOWN, - VIRTCHNL_LINK_SPEED_UNKNOWN /* BIT(14) */ }; /** @@ -71,21 +67,20 @@ static const u32 ice_legacy_aq_to_vc_speed[15] = { */ u32 ice_conv_link_speed_to_virtchnl(bool adv_link_support, u16 link_speed) { - u32 speed; + /* convert a BIT() value into an array index */ + u32 index = fls(link_speed) - 1; - if (adv_link_support) { - /* convert a BIT() value into an array index */ - speed = ice_get_link_speed(fls(link_speed) - 1); - } else { + if (adv_link_support) + return ice_get_link_speed(index); + else if (index < ARRAY_SIZE(ice_legacy_aq_to_vc_speed)) /* Virtchnl speeds are not defined for every speed supported in * the hardware. 
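
Both link-speed tables in this patch drop their trailing zero padding and gain index guards: ice_get_link_speed() (earlier, in ice_common.c) checks the index against ARRAY_SIZE() before dereferencing, and ice_conv_link_speed_to_virtchnl() here falls back to VIRTCHNL_LINK_SPEED_UNKNOWN for out-of-range bits, instead of silently reading padding entries. The guarded fls()-to-index lookup in isolation — table contents are illustrative:

#include <linux/kernel.h>	/* ARRAY_SIZE, fls() */
#include <linux/types.h>

static const u32 speed_tbl[] = { 10, 100, 1000, 2500, 5000, 10000 };

static u32 speed_from_link_bit(u16 link_speed)
{
	u32 idx = fls(link_speed) - 1;	/* BIT(n) -> n; fls(0) wraps huge */

	if (idx >= ARRAY_SIZE(speed_tbl))
		return 0;	/* unknown speed */

	return speed_tbl[idx];
}

Note that the unsigned wraparound for link_speed == 0 is caught by the same bounds check, which is the property the two ice hunks rely on.
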
To maintain compatibility with older AVF * drivers, while reporting the speed the new speed values are * resolved to the closest known virtchnl speeds */ - speed = ice_legacy_aq_to_vc_speed[fls(link_speed) - 1]; - } + return ice_legacy_aq_to_vc_speed[index]; - return speed; + return VIRTCHNL_LINK_SPEED_UNKNOWN; } /* The mailbox overflow detection algorithm helps to check if there diff --git a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c index 5ecc0ee9a78e..b1ffb81893d4 100644 --- a/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c +++ b/drivers/net/ethernet/intel/ice/ice_vf_vsi_vlan_ops.c @@ -44,13 +44,17 @@ void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) /* outer VLAN ops regardless of port VLAN config */ vlan_ops->add_vlan = ice_vsi_add_vlan; - vlan_ops->dis_rx_filtering = ice_vsi_dis_rx_vlan_filtering; vlan_ops->ena_tx_filtering = ice_vsi_ena_tx_vlan_filtering; vlan_ops->dis_tx_filtering = ice_vsi_dis_tx_vlan_filtering; if (ice_vf_is_port_vlan_ena(vf)) { /* setup outer VLAN ops */ vlan_ops->set_port_vlan = ice_vsi_set_outer_port_vlan; + /* all Rx traffic should be in the domain of the + * assigned port VLAN, so prevent disabling Rx VLAN + * filtering + */ + vlan_ops->dis_rx_filtering = noop_vlan; vlan_ops->ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; @@ -63,6 +67,9 @@ void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) vlan_ops->ena_insertion = ice_vsi_ena_inner_insertion; vlan_ops->dis_insertion = ice_vsi_dis_inner_insertion; } else { + vlan_ops->dis_rx_filtering = + ice_vsi_dis_rx_vlan_filtering; + if (!test_bit(ICE_FLAG_VF_VLAN_PRUNING, pf->flags)) vlan_ops->ena_rx_filtering = noop_vlan; else @@ -96,7 +103,14 @@ void ice_vf_vsi_init_vlan_ops(struct ice_vsi *vsi) vlan_ops->set_port_vlan = ice_vsi_set_inner_port_vlan; vlan_ops->ena_rx_filtering = ice_vsi_ena_rx_vlan_filtering; + /* all Rx traffic should be in the domain of the + * assigned port VLAN, so prevent disabling Rx VLAN + * filtering + */ + vlan_ops->dis_rx_filtering = noop_vlan; } else { + vlan_ops->dis_rx_filtering = + ice_vsi_dis_rx_vlan_filtering; if (!test_bit(ICE_FLAG_VF_VLAN_PRUNING, pf->flags)) vlan_ops->ena_rx_filtering = noop_vlan; else diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c index 7105de6fb344..a25a68c69f22 100644 --- a/drivers/net/ethernet/intel/ice/ice_xsk.c +++ b/drivers/net/ethernet/intel/ice/ice_xsk.c @@ -598,6 +598,107 @@ ice_construct_skb_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp) } /** + * ice_clean_xdp_irq_zc - AF_XDP ZC specific Tx cleaning routine + * @xdp_ring: XDP Tx ring + */ +static void ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring) +{ + u16 ntc = xdp_ring->next_to_clean; + struct ice_tx_desc *tx_desc; + u16 cnt = xdp_ring->count; + struct ice_tx_buf *tx_buf; + u16 xsk_frames = 0; + u16 last_rs; + int i; + + last_rs = xdp_ring->next_to_use ? 
xdp_ring->next_to_use - 1 : cnt - 1; + tx_desc = ICE_TX_DESC(xdp_ring, last_rs); + if (tx_desc->cmd_type_offset_bsz & + cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE)) { + if (last_rs >= ntc) + xsk_frames = last_rs - ntc + 1; + else + xsk_frames = last_rs + cnt - ntc + 1; + } + + if (!xsk_frames) + return; + + if (likely(!xdp_ring->xdp_tx_active)) + goto skip; + + ntc = xdp_ring->next_to_clean; + for (i = 0; i < xsk_frames; i++) { + tx_buf = &xdp_ring->tx_buf[ntc]; + + if (tx_buf->xdp) { + xsk_buff_free(tx_buf->xdp); + xdp_ring->xdp_tx_active--; + } else { + xsk_frames++; + } + + ntc++; + if (ntc == cnt) + ntc = 0; + } +skip: + tx_desc->cmd_type_offset_bsz = 0; + xdp_ring->next_to_clean += xsk_frames; + if (xdp_ring->next_to_clean >= cnt) + xdp_ring->next_to_clean -= cnt; + if (xsk_frames) + xsk_tx_completed(xdp_ring->xsk_pool, xsk_frames); +} + +/** + * ice_xmit_xdp_tx_zc - AF_XDP ZC handler for XDP_TX + * @xdp: XDP buffer to xmit + * @xdp_ring: XDP ring to produce descriptor onto + * + * note that this function works directly on xdp_buff, no need to convert + * it to xdp_frame. xdp_buff pointer is stored to ice_tx_buf so that cleaning + * side will be able to xsk_buff_free() it. + * + * Returns ICE_XDP_TX for successfully produced desc, ICE_XDP_CONSUMED if there + * was not enough space on XDP ring + */ +static int ice_xmit_xdp_tx_zc(struct xdp_buff *xdp, + struct ice_tx_ring *xdp_ring) +{ + u32 size = xdp->data_end - xdp->data; + u32 ntu = xdp_ring->next_to_use; + struct ice_tx_desc *tx_desc; + struct ice_tx_buf *tx_buf; + dma_addr_t dma; + + if (ICE_DESC_UNUSED(xdp_ring) < ICE_RING_QUARTER(xdp_ring)) { + ice_clean_xdp_irq_zc(xdp_ring); + if (!ICE_DESC_UNUSED(xdp_ring)) { + xdp_ring->ring_stats->tx_stats.tx_busy++; + return ICE_XDP_CONSUMED; + } + } + + dma = xsk_buff_xdp_get_dma(xdp); + xsk_buff_raw_dma_sync_for_device(xdp_ring->xsk_pool, dma, size); + + tx_buf = &xdp_ring->tx_buf[ntu]; + tx_buf->xdp = xdp; + tx_desc = ICE_TX_DESC(xdp_ring, ntu); + tx_desc->buf_addr = cpu_to_le64(dma); + tx_desc->cmd_type_offset_bsz = ice_build_ctob(ICE_TX_DESC_CMD_EOP, + 0, size, 0); + xdp_ring->xdp_tx_active++; + + if (++ntu == xdp_ring->count) + ntu = 0; + xdp_ring->next_to_use = ntu; + + return ICE_XDP_TX; +} + +/** * ice_run_xdp_zc - Executes an XDP program in zero-copy path * @rx_ring: Rx ring * @xdp: xdp_buff used as input to the XDP program @@ -630,7 +731,7 @@ ice_run_xdp_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp, case XDP_PASS: break; case XDP_TX: - result = ice_xmit_xdp_buff(xdp, xdp_ring); + result = ice_xmit_xdp_tx_zc(xdp, xdp_ring); if (result == ICE_XDP_CONSUMED) goto out_failure; break; @@ -760,7 +861,7 @@ construct_skb: if (entries_to_alloc > ICE_RING_QUARTER(rx_ring)) failure |= !ice_alloc_rx_bufs_zc(rx_ring, entries_to_alloc); - ice_finalize_xdp_rx(xdp_ring, xdp_xmit); + ice_finalize_xdp_rx(xdp_ring, xdp_xmit, 0); ice_update_rx_ring_stats(rx_ring, total_rx_packets, total_rx_bytes); if (xsk_uses_need_wakeup(rx_ring->xsk_pool)) { @@ -776,75 +877,6 @@ construct_skb: } /** - * ice_clean_xdp_tx_buf - Free and unmap XDP Tx buffer - * @xdp_ring: XDP Tx ring - * @tx_buf: Tx buffer to clean - */ -static void -ice_clean_xdp_tx_buf(struct ice_tx_ring *xdp_ring, struct ice_tx_buf *tx_buf) -{ - page_frag_free(tx_buf->raw_buf); - xdp_ring->xdp_tx_active--; - dma_unmap_single(xdp_ring->dev, dma_unmap_addr(tx_buf, dma), - dma_unmap_len(tx_buf, len), DMA_TO_DEVICE); - dma_unmap_len_set(tx_buf, len, 0); -} - -/** - * ice_clean_xdp_irq_zc - produce AF_XDP descriptors to CQ - * @xdp_ring: XDP Tx 
ring - */ -static void ice_clean_xdp_irq_zc(struct ice_tx_ring *xdp_ring) -{ - u16 ntc = xdp_ring->next_to_clean; - struct ice_tx_desc *tx_desc; - u16 cnt = xdp_ring->count; - struct ice_tx_buf *tx_buf; - u16 xsk_frames = 0; - u16 last_rs; - int i; - - last_rs = xdp_ring->next_to_use ? xdp_ring->next_to_use - 1 : cnt - 1; - tx_desc = ICE_TX_DESC(xdp_ring, last_rs); - if ((tx_desc->cmd_type_offset_bsz & - cpu_to_le64(ICE_TX_DESC_DTYPE_DESC_DONE))) { - if (last_rs >= ntc) - xsk_frames = last_rs - ntc + 1; - else - xsk_frames = last_rs + cnt - ntc + 1; - } - - if (!xsk_frames) - return; - - if (likely(!xdp_ring->xdp_tx_active)) - goto skip; - - ntc = xdp_ring->next_to_clean; - for (i = 0; i < xsk_frames; i++) { - tx_buf = &xdp_ring->tx_buf[ntc]; - - if (tx_buf->raw_buf) { - ice_clean_xdp_tx_buf(xdp_ring, tx_buf); - tx_buf->raw_buf = NULL; - } else { - xsk_frames++; - } - - ntc++; - if (ntc >= xdp_ring->count) - ntc = 0; - } -skip: - tx_desc->cmd_type_offset_bsz = 0; - xdp_ring->next_to_clean += xsk_frames; - if (xdp_ring->next_to_clean >= cnt) - xdp_ring->next_to_clean -= cnt; - if (xsk_frames) - xsk_tx_completed(xdp_ring->xsk_pool, xsk_frames); -} - -/** * ice_xmit_pkt - produce a single HW Tx descriptor out of AF_XDP descriptor * @xdp_ring: XDP ring to produce the HW Tx descriptor on * @desc: AF_XDP descriptor to pull the DMA address and length from @@ -918,20 +950,6 @@ static void ice_fill_tx_hw_ring(struct ice_tx_ring *xdp_ring, struct xdp_desc *d } /** - * ice_set_rs_bit - set RS bit on last produced descriptor (one behind current NTU) - * @xdp_ring: XDP ring to produce the HW Tx descriptors on - */ -static void ice_set_rs_bit(struct ice_tx_ring *xdp_ring) -{ - u16 ntu = xdp_ring->next_to_use ? xdp_ring->next_to_use - 1 : xdp_ring->count - 1; - struct ice_tx_desc *tx_desc; - - tx_desc = ICE_TX_DESC(xdp_ring, ntu); - tx_desc->cmd_type_offset_bsz |= - cpu_to_le64(ICE_TX_DESC_CMD_RS << ICE_TXD_QW1_CMD_S); -} - -/** * ice_xmit_zc - take entries from XSK Tx ring and place them onto HW Tx ring * @xdp_ring: XDP ring to produce the HW Tx descriptors on * @@ -1065,8 +1083,8 @@ void ice_xsk_clean_xdp_ring(struct ice_tx_ring *xdp_ring) while (ntc != ntu) { struct ice_tx_buf *tx_buf = &xdp_ring->tx_buf[ntc]; - if (tx_buf->raw_buf) - ice_clean_xdp_tx_buf(xdp_ring, tx_buf); + if (tx_buf->xdp) + xsk_buff_free(tx_buf->xdp); else xsk_frames++; diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c index 45fbd8346de7..087c950fea0b 100644 --- a/drivers/net/ethernet/intel/igb/igb_main.c +++ b/drivers/net/ethernet/intel/igb/igb_main.c @@ -2889,8 +2889,14 @@ static int igb_xdp_setup(struct net_device *dev, struct netdev_bpf *bpf) bpf_prog_put(old_prog); /* bpf is just replaced, RXQ and MTU are already setup */ - if (!need_reset) + if (!need_reset) { return 0; + } else { + if (prog) + xdp_features_set_redirect_target(dev, true); + else + xdp_features_clear_redirect_target(dev); + } if (running) igb_open(dev); @@ -3333,6 +3339,7 @@ static int igb_probe(struct pci_dev *pdev, const struct pci_device_id *ent) netdev->priv_flags |= IFF_SUPP_NOFCS; netdev->priv_flags |= IFF_UNICAST_FLT; + netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT; /* MTU range: 68 - 9216 */ netdev->min_mtu = ETH_MIN_MTU; diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c index 4c626f756a8b..2928a6c73692 100644 --- a/drivers/net/ethernet/intel/igc/igc_main.c +++ b/drivers/net/ethernet/intel/igc/igc_main.c @@ -2942,7 +2942,9 @@ 
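/* [Editor's note] A pattern repeated across the Intel, Marvell,
 * MediaTek and Mellanox hunks in this section: drivers now advertise
 * their XDP capabilities up front in the new netdev->xdp_features
 * bitmask, e.g.
 *
 *	netdev->xdp_features = NETDEV_XDP_ACT_BASIC |
 *			       NETDEV_XDP_ACT_REDIRECT;
 *
 * and flip the "usable as a redirect target" bit as a program is
 * attached or detached:
 *
 *	xdp_features_set_redirect_target(dev, true);
 *	xdp_features_clear_redirect_target(dev);
 *
 * The boolean argument appears to signal whether ndo_xdp_xmit handles
 * multi-fragment frames; igb passes true here, otx2 below passes
 * false.
 */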
static bool igc_clean_tx_irq(struct igc_q_vector *q_vector, int napi_budget) if (tx_buffer->next_to_watch && time_after(jiffies, tx_buffer->time_stamp + (adapter->tx_timeout_factor * HZ)) && - !(rd32(IGC_STATUS) & IGC_STATUS_TXOFF)) { + !(rd32(IGC_STATUS) & IGC_STATUS_TXOFF) && + (rd32(IGC_TDH(tx_ring->reg_idx)) != + readl(tx_ring->tail))) { /* detected Tx unit hang */ netdev_err(tx_ring->netdev, "Detected Tx Unit Hang\n" @@ -5069,6 +5071,24 @@ static int igc_change_mtu(struct net_device *netdev, int new_mtu) } /** + * igc_tx_timeout - Respond to a Tx Hang + * @netdev: network interface device structure + * @txqueue: queue number that timed out + **/ +static void igc_tx_timeout(struct net_device *netdev, + unsigned int __always_unused txqueue) +{ + struct igc_adapter *adapter = netdev_priv(netdev); + struct igc_hw *hw = &adapter->hw; + + /* Do the reset outside of interrupt context */ + adapter->tx_timeout_count++; + schedule_work(&adapter->reset_task); + wr32(IGC_EICS, + (adapter->eims_enable_mask & ~adapter->eims_other)); +} + +/** * igc_get_stats64 - Get System Network Statistics * @netdev: network interface device structure * @stats: rtnl_link_stats64 pointer @@ -5495,7 +5515,7 @@ static void igc_watchdog_task(struct work_struct *work) case SPEED_100: case SPEED_1000: case SPEED_2500: - adapter->tx_timeout_factor = 7; + adapter->tx_timeout_factor = 1; break; } @@ -6347,6 +6367,7 @@ static const struct net_device_ops igc_netdev_ops = { .ndo_set_rx_mode = igc_set_rx_mode, .ndo_set_mac_address = igc_set_mac, .ndo_change_mtu = igc_change_mtu, + .ndo_tx_timeout = igc_tx_timeout, .ndo_get_stats64 = igc_get_stats64, .ndo_fix_features = igc_fix_features, .ndo_set_features = igc_set_features, @@ -6554,6 +6575,9 @@ static int igc_probe(struct pci_dev *pdev, netdev->mpls_features |= NETIF_F_HW_CSUM; netdev->hw_enc_features |= netdev->vlan_features; + netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_XSK_ZEROCOPY; + /* MTU range: 68 - 9216 */ netdev->min_mtu = ETH_MIN_MTU; netdev->max_mtu = MAX_STD_JUMBO_FRAME_SIZE; diff --git a/drivers/net/ethernet/intel/igc/igc_xdp.c b/drivers/net/ethernet/intel/igc/igc_xdp.c index aeeb34e64610..e27af72aada8 100644 --- a/drivers/net/ethernet/intel/igc/igc_xdp.c +++ b/drivers/net/ethernet/intel/igc/igc_xdp.c @@ -29,6 +29,11 @@ int igc_xdp_set_prog(struct igc_adapter *adapter, struct bpf_prog *prog, if (old_prog) bpf_prog_put(old_prog); + if (prog) + xdp_features_set_redirect_target(dev, true); + else + xdp_features_clear_redirect_target(dev); + if (if_running) igc_open(dev); diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c index 992b7ae75233..a3aaf051f9f1 100644 --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c @@ -10301,6 +10301,8 @@ static int ixgbe_xdp_setup(struct net_device *dev, struct bpf_prog *prog) if (err) return -EINVAL; + if (!prog) + xdp_features_clear_redirect_target(dev); } else { for (i = 0; i < adapter->num_rx_queues; i++) { WRITE_ONCE(adapter->rx_ring[i]->xdp_prog, @@ -10321,6 +10323,7 @@ static int ixgbe_xdp_setup(struct net_device *dev, struct bpf_prog *prog) if (adapter->xdp_ring[i]->xsk_pool) (void)ixgbe_xsk_wakeup(adapter->netdev, i, XDP_WAKEUP_RX); + xdp_features_set_redirect_target(dev, true); } return 0; @@ -11016,6 +11019,9 @@ skip_sriov: netdev->priv_flags |= IFF_UNICAST_FLT; netdev->priv_flags |= IFF_SUPP_NOFCS; + netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + 
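/* [Editor's note] The igc hunks above tighten Tx-hang handling from
 * two sides. Detection now additionally requires that the hardware
 * head has not caught up with the tail (TDH != tail, i.e. descriptors
 * really are pending), and a .ndo_tx_timeout handler is finally wired
 * up, deferring the reset out of the timeout context in the usual
 * Intel-driver shape:
 *
 *	adapter->tx_timeout_count++;
 *	schedule_work(&adapter->reset_task);
 *
 * Dropping tx_timeout_factor from 7 to 1 at 100 Mb/s and above lets
 * the stack watchdog notice a stuck queue roughly seven times sooner.
 */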
NETDEV_XDP_ACT_XSK_ZEROCOPY; + /* MTU range: 68 - 9710 */ netdev->min_mtu = ETH_MIN_MTU; netdev->max_mtu = IXGBE_MAX_JUMBO_FRAME_SIZE - (ETH_HLEN + ETH_FCS_LEN); diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c index ea0a230c1153..a44e4bd56142 100644 --- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c +++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c @@ -4634,6 +4634,7 @@ static int ixgbevf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) NETIF_F_HW_VLAN_CTAG_TX; netdev->priv_flags |= IFF_UNICAST_FLT; + netdev->xdp_features = NETDEV_XDP_ACT_BASIC; /* MTU range: 68 - 1504 or 9710 */ netdev->min_mtu = ETH_MIN_MTU; diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c index a48588c80317..1cb4f59c0050 100644 --- a/drivers/net/ethernet/marvell/mvneta.c +++ b/drivers/net/ethernet/marvell/mvneta.c @@ -5612,6 +5612,9 @@ static int mvneta_probe(struct platform_device *pdev) NETIF_F_TSO | NETIF_F_RXCSUM; dev->hw_features |= dev->features; dev->vlan_features |= dev->features; + dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT | NETDEV_XDP_ACT_RX_SG | + NETDEV_XDP_ACT_NDO_XMIT_SG; dev->priv_flags |= IFF_LIVE_ADDR_CHANGE; netif_set_tso_max_segs(dev, MVNETA_MAX_TSO_SEGS); diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c index 4da45c5abba5..9b4ecbe4f36d 100644 --- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c +++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c @@ -6866,6 +6866,10 @@ static int mvpp2_port_probe(struct platform_device *pdev, dev->vlan_features |= features; netif_set_tso_max_segs(dev, MVPP2_MAX_TSO_SEGS); + + dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT; + dev->priv_flags |= IFF_UNICAST_FLT; /* MTU range: 68 - 9704 */ diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c index c1ea60bc2630..179433d0a54a 100644 --- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c +++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c @@ -2512,10 +2512,13 @@ static int otx2_xdp_setup(struct otx2_nic *pf, struct bpf_prog *prog) /* Network stack and XDP shared same rx queues. * Use separate tx queues for XDP and network stack. 
*/ - if (pf->xdp_prog) + if (pf->xdp_prog) { pf->hw.xdp_queues = pf->hw.rx_queues; - else + xdp_features_set_redirect_target(dev, false); + } else { pf->hw.xdp_queues = 0; + xdp_features_clear_redirect_target(dev); + } pf->hw.tot_tx_queues += pf->hw.xdp_queues; @@ -2878,6 +2881,7 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id) netdev->watchdog_timeo = OTX2_TX_TIMEOUT; netdev->netdev_ops = &otx2_netdev_ops; + netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT; netdev->min_mtu = OTX2_MIN_MTU; netdev->max_mtu = otx2_get_max_mtu(pf); diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c index f1cb1efc94cf..14be6ea51b88 100644 --- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c +++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c @@ -1621,8 +1621,8 @@ static struct page_pool *mtk_create_page_pool(struct mtk_eth *eth, if (IS_ERR(pp)) return pp; - err = __xdp_rxq_info_reg(xdp_q, &eth->dummy_dev, eth->rx_napi.napi_id, - id, PAGE_SIZE); + err = __xdp_rxq_info_reg(xdp_q, &eth->dummy_dev, id, + eth->rx_napi.napi_id, PAGE_SIZE); if (err < 0) goto err_free_pp; @@ -1921,7 +1921,9 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget, while (done < budget) { unsigned int pktlen, *rxdcsum; + bool has_hwaccel_tag = false; struct net_device *netdev; + u16 vlan_proto, vlan_tci; dma_addr_t dma_addr; u32 hash, reason; int mac = 0; @@ -2061,27 +2063,29 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget, if (netdev->features & NETIF_F_HW_VLAN_CTAG_RX) { if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) { - if (trxd.rxd3 & RX_DMA_VTAG_V2) - __vlan_hwaccel_put_tag(skb, - htons(RX_DMA_VPID(trxd.rxd4)), - RX_DMA_VID(trxd.rxd4)); + if (trxd.rxd3 & RX_DMA_VTAG_V2) { + vlan_proto = RX_DMA_VPID(trxd.rxd4); + vlan_tci = RX_DMA_VID(trxd.rxd4); + has_hwaccel_tag = true; + } } else if (trxd.rxd2 & RX_DMA_VTAG) { - __vlan_hwaccel_put_tag(skb, htons(RX_DMA_VPID(trxd.rxd3)), - RX_DMA_VID(trxd.rxd3)); + vlan_proto = RX_DMA_VPID(trxd.rxd3); + vlan_tci = RX_DMA_VID(trxd.rxd3); + has_hwaccel_tag = true; } } /* When using VLAN untagging in combination with DSA, the * hardware treats the MTK special tag as a VLAN and untags it.
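/* [Editor's note] The mtk_create_page_pool() change just above is a
 * plain argument-order fix. The registration helper takes the queue
 * index before the NAPI id:
 *
 *	int __xdp_rxq_info_reg(struct xdp_rxq_info *xdp_rxq,
 *			       struct net_device *dev, u32 queue_index,
 *			       unsigned int napi_id, u32 frag_size);
 *
 * so the old call had been registering the NAPI id as the queue index
 * and vice versa.
 */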
*/ - if (skb_vlan_tag_present(skb) && netdev_uses_dsa(netdev)) { - unsigned int port = ntohs(skb->vlan_proto) & GENMASK(2, 0); + if (has_hwaccel_tag && netdev_uses_dsa(netdev)) { + unsigned int port = vlan_proto & GENMASK(2, 0); if (port < ARRAY_SIZE(eth->dsa_meta) && eth->dsa_meta[port]) skb_dst_set_noref(skb, &eth->dsa_meta[port]->dst); - - __vlan_hwaccel_clear_tag(skb); + } else if (has_hwaccel_tag) { + __vlan_hwaccel_put_tag(skb, htons(vlan_proto), vlan_tci); } skb_record_rx_queue(skb, 0); @@ -3177,7 +3181,7 @@ static void mtk_gdm_config(struct mtk_eth *eth, u32 config) val |= config; - if (!i && eth->netdev[0] && netdev_uses_dsa(eth->netdev[0])) + if (eth->netdev[i] && netdev_uses_dsa(eth->netdev[i])) val |= MTK_GDMA_SPECIAL_TAG; mtk_w32(eth, val, MTK_GDMA_FWD_CFG(i)); @@ -3243,8 +3247,7 @@ static int mtk_open(struct net_device *dev) struct mtk_eth *eth = mac->hw; int i, err; - if ((mtk_uses_dsa(dev) && !eth->prog) && - !(mac->id == 1 && MTK_HAS_CAPS(eth->soc->caps, MTK_GMAC1_TRGMII))) { + if (mtk_uses_dsa(dev) && !eth->prog) { for (i = 0; i < ARRAY_SIZE(eth->dsa_meta); i++) { struct metadata_dst *md_dst = eth->dsa_meta[i]; @@ -3261,8 +3264,7 @@ static int mtk_open(struct net_device *dev) } } else { /* Hardware special tag parsing needs to be disabled if at least - * one MAC does not use DSA, or the second MAC of the MT7621 and - * MT7623 SoCs is being used. + * one MAC does not use DSA. */ u32 val = mtk_r32(eth, MTK_CDMP_IG_CTRL); val &= ~MTK_CDMP_STAG_EN; @@ -4449,6 +4451,12 @@ static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np) register_netdevice_notifier(&mac->device_notifier); } + if (mtk_page_pool_enabled(eth)) + eth->netdev[id]->xdp_features = NETDEV_XDP_ACT_BASIC | + NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT | + NETDEV_XDP_ACT_NDO_XMIT_SG; + return 0; free_netdev: diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c index af4c4858f397..e11bc0ac880e 100644 --- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c +++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c @@ -3416,6 +3416,8 @@ int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port, priv->rss_hash_fn = ETH_RSS_HASH_TOP; } + dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT; + /* MTU range: 68 - hw-specific max */ dev->min_mtu = ETH_MIN_MTU; dev->max_mtu = priv->max_mtu; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c b/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c index 3e232a65a0c3..bb95b40d25eb 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/debugfs.c @@ -245,8 +245,9 @@ void mlx5_pages_debugfs_init(struct mlx5_core_dev *dev) pages = dev->priv.dbg.pages_debugfs; debugfs_create_u32("fw_pages_total", 0400, pages, &dev->priv.fw_pages); - debugfs_create_u32("fw_pages_vfs", 0400, pages, &dev->priv.vfs_pages); - debugfs_create_u32("fw_pages_host_pf", 0400, pages, &dev->priv.host_pf_pages); + debugfs_create_u32("fw_pages_vfs", 0400, pages, &dev->priv.page_counters[MLX5_VF]); + debugfs_create_u32("fw_pages_sfs", 0400, pages, &dev->priv.page_counters[MLX5_SF]); + debugfs_create_u32("fw_pages_host_pf", 0400, pages, &dev->priv.page_counters[MLX5_HOST_PF]); debugfs_create_u32("fw_pages_alloc_failed", 0400, pages, &dev->priv.fw_pages_alloc_failed); debugfs_create_u32("fw_pages_give_dropped", 0400, pages, &dev->priv.give_pages_dropped); debugfs_create_u32("fw_pages_reclaim_discard", 0400, pages, diff --git
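/* [Editor's note] In the mtk_poll_rx() hunk above, the hardware VLAN
 * tag is no longer attached to the skb immediately and then stripped
 * again for DSA ports. The (proto, tci) pair is parked in locals
 * behind a has_hwaccel_tag flag, and only attached once the frame is
 * known not to carry the MTK DSA special tag:
 *
 *	} else if (has_hwaccel_tag) {
 *		__vlan_hwaccel_put_tag(skb, htons(vlan_proto), vlan_tci);
 *	}
 *
 * so the special tag is never exposed to the stack as a real VLAN.
 */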
a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c index f7864d51d4c4..f40497823e65 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c @@ -64,6 +64,7 @@ static int mlx5_query_mtrc_caps(struct mlx5_fw_tracer *tracer) MLX5_GET(mtrc_cap, out, num_string_trace); tracer->str_db.num_string_db = MLX5_GET(mtrc_cap, out, num_string_db); tracer->owner = !!MLX5_GET(mtrc_cap, out, trace_owner); + tracer->str_db.loaded = false; for (i = 0; i < tracer->str_db.num_string_db; i++) { mtrc_cap_sp = MLX5_ADDR_OF(mtrc_cap, out, string_db_param[i]); @@ -783,6 +784,7 @@ static int mlx5_fw_tracer_set_mtrc_conf(struct mlx5_fw_tracer *tracer) if (err) mlx5_core_warn(dev, "FWTracer: Failed to set tracer configurations %d\n", err); + tracer->buff.consumer_index = 0; return err; } @@ -847,7 +849,6 @@ static void mlx5_fw_tracer_ownership_change(struct work_struct *work) mlx5_core_dbg(tracer->dev, "FWTracer: ownership changed, current=(%d)\n", tracer->owner); if (tracer->owner) { tracer->owner = false; - tracer->buff.consumer_index = 0; return; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ecpf.c b/drivers/net/ethernet/mellanox/mlx5/core/ecpf.c index b70e36025d92..9a3878f9e582 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/ecpf.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/ecpf.c @@ -95,7 +95,7 @@ void mlx5_ec_cleanup(struct mlx5_core_dev *dev) mlx5_host_pf_cleanup(dev); - err = mlx5_wait_for_pages(dev, &dev->priv.host_pf_pages); + err = mlx5_wait_for_pages(dev, &dev->priv.page_counters[MLX5_HOST_PF]); if (err) mlx5_core_warn(dev, "Timeout reclaiming external host PF pages err(%d)\n", err); } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c index 8099a21e674c..ce85b48d327d 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c @@ -438,10 +438,6 @@ static int mlx5_esw_bridge_switchdev_event(struct notifier_block *nb, switch (event) { case SWITCHDEV_FDB_ADD_TO_BRIDGE: - /* only handle the event on native eswtich of representor */ - if (!mlx5_esw_bridge_is_local(dev, rep, esw)) - break; - fdb_info = container_of(info, struct switchdev_notifier_fdb_info, info); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c index 7298fe782e9e..05796f8b1d7c 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c @@ -450,7 +450,7 @@ void mlx5e_enable_cvlan_filter(struct mlx5e_flow_steering *fs, bool promisc) void mlx5e_disable_cvlan_filter(struct mlx5e_flow_steering *fs, bool promisc) { - if (fs->vlan->cvlan_filter_disabled) + if (!fs->vlan || fs->vlan->cvlan_filter_disabled) return; fs->vlan->cvlan_filter_disabled = true; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c index 6ac4c092a591..ec81d935262f 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c @@ -597,7 +597,8 @@ static int mlx5e_init_rxq_rq(struct mlx5e_channel *c, struct mlx5e_params *param rq->ix = c->ix; rq->channel = c; rq->mdev = mdev; - rq->hw_mtu = MLX5E_SW2HW_MTU(params, params->sw_mtu); + rq->hw_mtu = + MLX5E_SW2HW_MTU(params, params->sw_mtu) - ETH_FCS_LEN * !params->scatter_fcs_en; rq->xdpsq = 
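/* [Editor's note] Two small defensive fixes sit in the mlx5e hunks
 * here: mlx5e_disable_cvlan_filter() gains a short-circuit NULL guard
 * (!fs->vlan || ...), and the RQ init just above sizes hw_mtu with the
 * multiply-by-boolean idiom,
 *
 *	hw_mtu = MLX5E_SW2HW_MTU(params, params->sw_mtu) -
 *		 ETH_FCS_LEN * !params->scatter_fcs_en;
 *
 * i.e. the 4 FCS bytes are subtracted exactly when FCS scatter is
 * disabled (the multiplier is !flag, so 1 or 0).
 */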
&c->rq_xdpsq; rq->stats = &c->priv->channel_stats[c->ix]->rq; rq->ptp_cyc2time = mlx5_rq_ts_translator(mdev); @@ -1020,35 +1021,6 @@ int mlx5e_flush_rq(struct mlx5e_rq *rq, int curr_state) return mlx5e_rq_to_ready(rq, curr_state); } -static int mlx5e_modify_rq_scatter_fcs(struct mlx5e_rq *rq, bool enable) -{ - struct mlx5_core_dev *mdev = rq->mdev; - - void *in; - void *rqc; - int inlen; - int err; - - inlen = MLX5_ST_SZ_BYTES(modify_rq_in); - in = kvzalloc(inlen, GFP_KERNEL); - if (!in) - return -ENOMEM; - - rqc = MLX5_ADDR_OF(modify_rq_in, in, ctx); - - MLX5_SET(modify_rq_in, in, rq_state, MLX5_RQC_STATE_RDY); - MLX5_SET64(modify_rq_in, in, modify_bitmask, - MLX5_MODIFY_RQ_IN_MODIFY_BITMASK_SCATTER_FCS); - MLX5_SET(rqc, rqc, scatter_fcs, enable); - MLX5_SET(rqc, rqc, state, MLX5_RQC_STATE_RDY); - - err = mlx5_core_modify_rq(mdev, rq->rqn, in); - - kvfree(in); - - return err; -} - static int mlx5e_modify_rq_vsd(struct mlx5e_rq *rq, bool vsd) { struct mlx5_core_dev *mdev = rq->mdev; @@ -3328,20 +3300,6 @@ static void mlx5e_cleanup_nic_tx(struct mlx5e_priv *priv) mlx5e_destroy_tises(priv); } -static int mlx5e_modify_channels_scatter_fcs(struct mlx5e_channels *chs, bool enable) -{ - int err = 0; - int i; - - for (i = 0; i < chs->num; i++) { - err = mlx5e_modify_rq_scatter_fcs(&chs->c[i]->rq, enable); - if (err) - return err; - } - - return 0; -} - static int mlx5e_modify_channels_vsd(struct mlx5e_channels *chs, bool vsd) { int err; @@ -3917,41 +3875,27 @@ static int mlx5e_set_rx_port_ts(struct mlx5_core_dev *mdev, bool enable) return mlx5_set_ports_check(mdev, in, sizeof(in)); } +static int mlx5e_set_rx_port_ts_wrap(struct mlx5e_priv *priv, void *ctx) +{ + struct mlx5_core_dev *mdev = priv->mdev; + bool enable = *(bool *)ctx; + + return mlx5e_set_rx_port_ts(mdev, enable); +} + static int set_feature_rx_fcs(struct net_device *netdev, bool enable) { struct mlx5e_priv *priv = netdev_priv(netdev); struct mlx5e_channels *chs = &priv->channels; - struct mlx5_core_dev *mdev = priv->mdev; + struct mlx5e_params new_params; int err; mutex_lock(&priv->state_lock); - if (enable) { - err = mlx5e_set_rx_port_ts(mdev, false); - if (err) - goto out; - - chs->params.scatter_fcs_en = true; - err = mlx5e_modify_channels_scatter_fcs(chs, true); - if (err) { - chs->params.scatter_fcs_en = false; - mlx5e_set_rx_port_ts(mdev, true); - } - } else { - chs->params.scatter_fcs_en = false; - err = mlx5e_modify_channels_scatter_fcs(chs, false); - if (err) { - chs->params.scatter_fcs_en = true; - goto out; - } - err = mlx5e_set_rx_port_ts(mdev, true); - if (err) { - mlx5_core_warn(mdev, "Failed to set RX port timestamp %d\n", err); - err = 0; - } - } - -out: + new_params = chs->params; + new_params.scatter_fcs_en = enable; + err = mlx5e_safe_switch_params(priv, &new_params, mlx5e_set_rx_port_ts_wrap, + &new_params.scatter_fcs_en, true); mutex_unlock(&priv->state_lock); return err; } @@ -4088,6 +4032,10 @@ static netdev_features_t mlx5e_fix_uplink_rep_features(struct net_device *netdev if (netdev->features & NETIF_F_GRO_HW) netdev_warn(netdev, "Disabling HW_GRO, not supported in switchdev mode\n"); + features &= ~NETIF_F_HW_VLAN_CTAG_FILTER; + if (netdev->features & NETIF_F_HW_VLAN_CTAG_FILTER) + netdev_warn(netdev, "Disabling HW_VLAN CTAG FILTERING, not supported in switchdev mode\n"); + return features; } @@ -4793,6 +4741,13 @@ static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog) if (old_prog) bpf_prog_put(old_prog); + if (reset) { + if (prog) + xdp_features_set_redirect_target(netdev, true); + 
else + xdp_features_clear_redirect_target(netdev); + } + if (!test_bit(MLX5E_STATE_OPENED, &priv->state) || reset) goto unlock; @@ -5188,6 +5143,10 @@ static void mlx5e_build_nic_netdev(struct net_device *netdev) netdev->features |= NETIF_F_HIGHDMA; netdev->features |= NETIF_F_HW_VLAN_STAG_FILTER; + netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_XSK_ZEROCOPY | + NETDEV_XDP_ACT_RX_SG; + netdev->priv_flags |= IFF_UNICAST_FLT; netif_set_tso_max_size(netdev, GSO_MAX_SIZE); diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c index b176648d1343..3cdcb0e0b20f 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge.c @@ -1715,7 +1715,7 @@ void mlx5_esw_bridge_fdb_update_used(struct net_device *dev, u16 vport_num, u16 struct mlx5_esw_bridge *bridge; port = mlx5_esw_bridge_port_lookup(vport_num, esw_owner_vhca_id, br_offloads); - if (!port || port->flags & MLX5_ESW_BRIDGE_PORT_FLAG_PEER) + if (!port) return; bridge = port->bridge; diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c index 11fefb99d685..779d92b762d3 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ethtool.c @@ -191,16 +191,16 @@ static inline int mlx5_ptys_rate_enum_to_int(enum mlx5_ptys_rate rate) } } -static int mlx5i_get_speed_settings(u16 ib_link_width_oper, u16 ib_proto_oper) +static u32 mlx5i_get_speed_settings(u16 ib_link_width_oper, u16 ib_proto_oper) { int rate, width; rate = mlx5_ptys_rate_enum_to_int(ib_proto_oper); if (rate < 0) - return -EINVAL; + return SPEED_UNKNOWN; width = mlx5_ptys_width_enum_to_int(ib_link_width_oper); if (width < 0) - return -EINVAL; + return SPEED_UNKNOWN; return rate * width; } @@ -223,16 +223,13 @@ static int mlx5i_get_link_ksettings(struct net_device *netdev, ethtool_link_ksettings_zero_link_mode(link_ksettings, advertising); speed = mlx5i_get_speed_settings(ib_link_width_oper, ib_proto_oper); - if (speed < 0) - return -EINVAL; + link_ksettings->base.speed = speed; + link_ksettings->base.duplex = speed == SPEED_UNKNOWN ? 
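/* [Editor's note] set_feature_rx_fcs() above is a textbook use of the
 * mlx5e "safe switch" pattern: rather than hand-rolled enable/disable
 * branches with manual rollback, the caller clones the channel
 * parameters, edits the clone, and lets mlx5e_safe_switch_params()
 * apply it with a preactivate callback that runs before the new
 * channels go live:
 *
 *	new_params = chs->params;
 *	new_params.scatter_fcs_en = enable;
 *	err = mlx5e_safe_switch_params(priv, &new_params,
 *				       mlx5e_set_rx_port_ts_wrap,
 *				       &new_params.scatter_fcs_en, true);
 *
 * If the switch fails, the previous parameters simply remain active.
 */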
DUPLEX_UNKNOWN : DUPLEX_FULL; - link_ksettings->base.duplex = DUPLEX_FULL; link_ksettings->base.port = PORT_OTHER; link_ksettings->base.autoneg = AUTONEG_DISABLE; - link_ksettings->base.speed = speed; - return 0; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c index 0de9180e4f6b..26e1057845fe 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/main.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c @@ -2131,7 +2131,7 @@ static int __init mlx5_init(void) mlx5_core_verify_params(); mlx5_register_debugfs(); - err = pci_register_driver(&mlx5_core_driver); + err = mlx5e_init(); if (err) goto err_debug; @@ -2139,16 +2139,16 @@ static int __init mlx5_init(void) if (err) goto err_sf; - err = mlx5e_init(); + err = pci_register_driver(&mlx5_core_driver); if (err) - goto err_en; + goto err_pci; return 0; -err_en: +err_pci: mlx5_sf_driver_unregister(); err_sf: - pci_unregister_driver(&mlx5_core_driver); + mlx5e_cleanup(); err_debug: mlx5_unregister_debugfs(); return err; @@ -2156,9 +2156,9 @@ err_debug: static void __exit mlx5_cleanup(void) { - mlx5e_cleanup(); - mlx5_sf_driver_unregister(); pci_unregister_driver(&mlx5_core_driver); + mlx5_sf_driver_unregister(); + mlx5e_cleanup(); mlx5_unregister_debugfs(); } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c index 96e57f1812a4..64d4e7125e9b 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c @@ -74,6 +74,14 @@ static u32 get_function(u16 func_id, bool ec_function) return (u32)func_id | (ec_function << 16); } +static u16 func_id_to_type(struct mlx5_core_dev *dev, u16 func_id, bool ec_function) +{ + if (!func_id) + return mlx5_core_is_ecpf(dev) && !ec_function ? MLX5_HOST_PF : MLX5_PF; + + return func_id <= mlx5_core_max_vfs(dev) ? 
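/* [Editor's note] The mlx5_init()/mlx5_cleanup() reshuffle above
 * follows the standard module bring-up rule: register the PCI driver
 * last, because probe() may run as soon as pci_register_driver()
 * returns and must find every other layer (mlx5e, the SF driver)
 * already initialised. Teardown mirrors init in exact reverse:
 *
 *	init:    mlx5e_init(); mlx5_sf_driver_register();
 *	         pci_register_driver();
 *	exit:    pci_unregister_driver(); mlx5_sf_driver_unregister();
 *	         mlx5e_cleanup();
 */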
MLX5_VF : MLX5_SF; +} + static struct rb_root *page_root_per_function(struct mlx5_core_dev *dev, u32 function) { struct rb_root *root; @@ -333,6 +341,7 @@ static int give_pages(struct mlx5_core_dev *dev, u16 func_id, int npages, u32 out[MLX5_ST_SZ_DW(manage_pages_out)] = {0}; int inlen = MLX5_ST_SZ_BYTES(manage_pages_in); int notify_fail = event; + u16 func_type; u64 addr; int err; u32 *in; @@ -384,11 +393,9 @@ retry: goto out_dropped; } + func_type = func_id_to_type(dev, func_id, ec_function); + dev->priv.page_counters[func_type] += npages; dev->priv.fw_pages += npages; - if (func_id) - dev->priv.vfs_pages += npages; - else if (mlx5_core_is_ecpf(dev) && !ec_function) - dev->priv.host_pf_pages += npages; mlx5_core_dbg(dev, "npages %d, ec_function %d, func_id 0x%x, err %d\n", npages, ec_function, func_id, err); @@ -415,6 +422,7 @@ static void release_all_pages(struct mlx5_core_dev *dev, u16 func_id, struct rb_root *root; struct rb_node *p; int npages = 0; + u16 func_type; root = xa_load(&dev->priv.page_root_xa, function); if (WARN_ON_ONCE(!root)) @@ -429,11 +437,9 @@ static void release_all_pages(struct mlx5_core_dev *dev, u16 func_id, free_fwp(dev, fwp, fwp->free_count); } + func_type = func_id_to_type(dev, func_id, ec_function); + dev->priv.page_counters[func_type] -= npages; dev->priv.fw_pages -= npages; - if (func_id) - dev->priv.vfs_pages -= npages; - else if (mlx5_core_is_ecpf(dev) && !ec_function) - dev->priv.host_pf_pages -= npages; mlx5_core_dbg(dev, "npages %d, ec_function %d, func_id 0x%x\n", npages, ec_function, func_id); @@ -499,6 +505,7 @@ static int reclaim_pages(struct mlx5_core_dev *dev, u16 func_id, int npages, int outlen = MLX5_ST_SZ_BYTES(manage_pages_out); u32 in[MLX5_ST_SZ_DW(manage_pages_in)] = {}; int num_claimed; + u16 func_type; u32 *out; int err; int i; @@ -550,11 +557,9 @@ static int reclaim_pages(struct mlx5_core_dev *dev, u16 func_id, int npages, if (nclaimed) *nclaimed = num_claimed; + func_type = func_id_to_type(dev, func_id, ec_function); + dev->priv.page_counters[func_type] -= num_claimed; dev->priv.fw_pages -= num_claimed; - if (func_id) - dev->priv.vfs_pages -= num_claimed; - else if (mlx5_core_is_ecpf(dev) && !ec_function) - dev->priv.host_pf_pages -= num_claimed; out_free: kvfree(out); @@ -707,12 +712,12 @@ int mlx5_reclaim_startup_pages(struct mlx5_core_dev *dev) WARN(dev->priv.fw_pages, "FW pages counter is %d after reclaiming all pages\n", dev->priv.fw_pages); - WARN(dev->priv.vfs_pages, + WARN(dev->priv.page_counters[MLX5_VF], "VFs FW pages counter is %d after reclaiming all pages\n", - dev->priv.vfs_pages); - WARN(dev->priv.host_pf_pages, + dev->priv.page_counters[MLX5_VF]); + WARN(dev->priv.page_counters[MLX5_HOST_PF], "External host PF FW pages counter is %d after reclaiming all pages\n", - dev->priv.host_pf_pages); + dev->priv.page_counters[MLX5_HOST_PF]); return 0; } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sriov.c b/drivers/net/ethernet/mellanox/mlx5/core/sriov.c index c0e6c487c63c..3008e9ce2bbf 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/sriov.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/sriov.c @@ -147,7 +147,7 @@ mlx5_device_disable_sriov(struct mlx5_core_dev *dev, int num_vfs, bool clear_vf) mlx5_eswitch_disable_sriov(dev->priv.eswitch, clear_vf); - if (mlx5_wait_for_pages(dev, &dev->priv.vfs_pages)) + if (mlx5_wait_for_pages(dev, &dev->priv.page_counters[MLX5_VF])) mlx5_core_warn(dev, "timeout reclaiming VFs pages\n"); } diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_rule.c 
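/* [Editor's note] The debugfs/ecpf/pagealloc/sriov hunks are one
 * refactor: the separate vfs_pages and host_pf_pages counters become
 * an array indexed by function type, with func_id_to_type() above
 * doing the classification: func_id 0 is the PF (or the external host
 * PF when running on an ECPF), IDs up to mlx5_core_max_vfs() are VFs,
 * and anything beyond is an SF. Accounting then collapses to:
 *
 *	func_type = func_id_to_type(dev, func_id, ec_function);
 *	dev->priv.page_counters[func_type] += npages;
 *
 * which also lets debugfs expose an SF page counter the old scheme
 * never had.
 */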
b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_rule.c index b851141e03de..042ca0349124 100644 --- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_rule.c +++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_rule.c @@ -1138,12 +1138,14 @@ dr_rule_create_rule_nic(struct mlx5dr_rule *rule, rule->flow_source)) return 0; + mlx5dr_domain_nic_lock(nic_dmn); + ret = mlx5dr_matcher_select_builders(matcher, nic_matcher, dr_rule_get_ipv(&param->outer), dr_rule_get_ipv(&param->inner)); if (ret) - return ret; + goto err_unlock; hw_ste_arr_is_opt = nic_matcher->num_of_builders <= DR_RULE_MAX_STES_OPTIMIZED; if (likely(hw_ste_arr_is_opt)) { @@ -1152,12 +1154,12 @@ dr_rule_create_rule_nic(struct mlx5dr_rule *rule, hw_ste_arr = kzalloc((nic_matcher->num_of_builders + DR_ACTION_MAX_STES) * DR_STE_SIZE, GFP_KERNEL); - if (!hw_ste_arr) - return -ENOMEM; + if (!hw_ste_arr) { + ret = -ENOMEM; + goto err_unlock; + } } - mlx5dr_domain_nic_lock(nic_dmn); - ret = mlx5dr_matcher_add_to_tbl_nic(dmn, nic_matcher); if (ret) goto free_hw_ste; @@ -1223,7 +1225,10 @@ dr_rule_create_rule_nic(struct mlx5dr_rule *rule, mlx5dr_domain_nic_unlock(nic_dmn); - goto out; + if (unlikely(!hw_ste_arr_is_opt)) + kfree(hw_ste_arr); + + return 0; free_rule: dr_rule_clean_rule_members(rule, nic_rule); @@ -1238,12 +1243,12 @@ remove_from_nic_tbl: mlx5dr_matcher_remove_from_tbl_nic(dmn, nic_matcher); free_hw_ste: - mlx5dr_domain_nic_unlock(nic_dmn); - -out: - if (unlikely(!hw_ste_arr_is_opt)) + if (!hw_ste_arr_is_opt) kfree(hw_ste_arr); +err_unlock: + mlx5dr_domain_nic_unlock(nic_dmn); + return ret; } diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_tc_flower.c b/drivers/net/ethernet/microchip/lan966x/lan966x_tc_flower.c index 1e464bb804ae..bd10a7189741 100644 --- a/drivers/net/ethernet/microchip/lan966x/lan966x_tc_flower.c +++ b/drivers/net/ethernet/microchip/lan966x/lan966x_tc_flower.c @@ -3,53 +3,135 @@ #include "lan966x_main.h" #include "vcap_api.h" #include "vcap_api_client.h" +#include "vcap_tc.h" -struct lan966x_tc_flower_parse_usage { - struct flow_cls_offload *f; - struct flow_rule *frule; - struct vcap_rule *vrule; - unsigned int used_keys; - u16 l3_proto; -}; +static bool lan966x_tc_is_known_etype(u16 etype) +{ + switch (etype) { + case ETH_P_ALL: + case ETH_P_ARP: + case ETH_P_IP: + case ETH_P_IPV6: + return true; + } -static int lan966x_tc_flower_handler_ethaddr_usage(struct lan966x_tc_flower_parse_usage *st) + return false; +} + +static int +lan966x_tc_flower_handler_control_usage(struct vcap_tc_flower_parse_usage *st) { - enum vcap_key_field smac_key = VCAP_KF_L2_SMAC; - enum vcap_key_field dmac_key = VCAP_KF_L2_DMAC; - struct flow_match_eth_addrs match; - struct vcap_u48_key smac, dmac; + struct flow_match_control match; int err = 0; - flow_rule_match_eth_addrs(st->frule, &match); - - if (!is_zero_ether_addr(match.mask->src)) { - vcap_netbytes_copy(smac.value, match.key->src, ETH_ALEN); - vcap_netbytes_copy(smac.mask, match.mask->src, ETH_ALEN); - err = vcap_rule_add_key_u48(st->vrule, smac_key, &smac); + flow_rule_match_control(st->frule, &match); + if (match.mask->flags & FLOW_DIS_IS_FRAGMENT) { + if (match.key->flags & FLOW_DIS_IS_FRAGMENT) + err = vcap_rule_add_key_bit(st->vrule, + VCAP_KF_L3_FRAGMENT, + VCAP_BIT_1); + else + err = vcap_rule_add_key_bit(st->vrule, + VCAP_KF_L3_FRAGMENT, + VCAP_BIT_0); if (err) goto out; } - if (!is_zero_ether_addr(match.mask->dst)) { - vcap_netbytes_copy(dmac.value, match.key->dst, ETH_ALEN); - vcap_netbytes_copy(dmac.mask, match.mask->dst, ETH_ALEN); - err =
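/* [Editor's note] The dr_rule_create_rule_nic() hunk above widens the
 * nic-domain lock so that builder selection and the STE array setup
 * happen under it (previously the lock was taken only afterwards,
 * which appears to have left a window for a concurrent writer), and
 * reworks the exits into the usual goto-unwind shape where every
 * failure path frees the heap STE array and drops the lock exactly
 * once:
 *
 *	free_hw_ste:
 *		if (!hw_ste_arr_is_opt)
 *			kfree(hw_ste_arr);
 *	err_unlock:
 *		mlx5dr_domain_nic_unlock(nic_dmn);
 *		return ret;
 */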
vcap_rule_add_key_u48(st->vrule, dmac_key, &dmac); + if (match.mask->flags & FLOW_DIS_FIRST_FRAG) { + if (match.key->flags & FLOW_DIS_FIRST_FRAG) + err = vcap_rule_add_key_bit(st->vrule, + VCAP_KF_L3_FRAG_OFS_GT0, + VCAP_BIT_0); + else + err = vcap_rule_add_key_bit(st->vrule, + VCAP_KF_L3_FRAG_OFS_GT0, + VCAP_BIT_1); if (err) goto out; } - st->used_keys |= BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS); + st->used_keys |= BIT(FLOW_DISSECTOR_KEY_CONTROL); return err; out: - NL_SET_ERR_MSG_MOD(st->f->common.extack, "eth_addr parse error"); + NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ip_frag parse error"); + return err; +} + +static int +lan966x_tc_flower_handler_basic_usage(struct vcap_tc_flower_parse_usage *st) +{ + struct flow_match_basic match; + int err = 0; + + flow_rule_match_basic(st->frule, &match); + if (match.mask->n_proto) { + st->l3_proto = be16_to_cpu(match.key->n_proto); + if (!lan966x_tc_is_known_etype(st->l3_proto)) { + err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_ETYPE, + st->l3_proto, ~0); + if (err) + goto out; + } else if (st->l3_proto == ETH_P_IP) { + err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_IP4_IS, + VCAP_BIT_1); + if (err) + goto out; + } + } + if (match.mask->ip_proto) { + st->l4_proto = match.key->ip_proto; + + if (st->l4_proto == IPPROTO_TCP) { + err = vcap_rule_add_key_bit(st->vrule, + VCAP_KF_TCP_IS, + VCAP_BIT_1); + if (err) + goto out; + } else if (st->l4_proto == IPPROTO_UDP) { + err = vcap_rule_add_key_bit(st->vrule, + VCAP_KF_TCP_IS, + VCAP_BIT_0); + if (err) + goto out; + } else { + err = vcap_rule_add_key_u32(st->vrule, + VCAP_KF_L3_IP_PROTO, + st->l4_proto, ~0); + if (err) + goto out; + } + } + + st->used_keys |= BIT(FLOW_DISSECTOR_KEY_BASIC); + return err; +out: + NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ip_proto parse error"); return err; } static int -(*lan966x_tc_flower_handlers_usage[])(struct lan966x_tc_flower_parse_usage *st) = { - [FLOW_DISSECTOR_KEY_ETH_ADDRS] = lan966x_tc_flower_handler_ethaddr_usage, +lan966x_tc_flower_handler_vlan_usage(struct vcap_tc_flower_parse_usage *st) +{ + return vcap_tc_flower_handler_vlan_usage(st, + VCAP_KF_8021Q_VID_CLS, + VCAP_KF_8021Q_PCP_CLS); +} + +static int +(*lan966x_tc_flower_handlers_usage[])(struct vcap_tc_flower_parse_usage *st) = { + [FLOW_DISSECTOR_KEY_ETH_ADDRS] = vcap_tc_flower_handler_ethaddr_usage, + [FLOW_DISSECTOR_KEY_IPV4_ADDRS] = vcap_tc_flower_handler_ipv4_usage, + [FLOW_DISSECTOR_KEY_IPV6_ADDRS] = vcap_tc_flower_handler_ipv6_usage, + [FLOW_DISSECTOR_KEY_CONTROL] = lan966x_tc_flower_handler_control_usage, + [FLOW_DISSECTOR_KEY_PORTS] = vcap_tc_flower_handler_portnum_usage, + [FLOW_DISSECTOR_KEY_BASIC] = lan966x_tc_flower_handler_basic_usage, + [FLOW_DISSECTOR_KEY_VLAN] = lan966x_tc_flower_handler_vlan_usage, + [FLOW_DISSECTOR_KEY_TCP] = vcap_tc_flower_handler_tcp_usage, + [FLOW_DISSECTOR_KEY_ARP] = vcap_tc_flower_handler_arp_usage, + [FLOW_DISSECTOR_KEY_IP] = vcap_tc_flower_handler_ip_usage, }; static int lan966x_tc_flower_use_dissectors(struct flow_cls_offload *f, @@ -57,8 +139,8 @@ static int lan966x_tc_flower_use_dissectors(struct flow_cls_offload *f, struct vcap_rule *vrule, u16 *l3_proto) { - struct lan966x_tc_flower_parse_usage state = { - .f = f, + struct vcap_tc_flower_parse_usage state = { + .fco = f, .vrule = vrule, .l3_proto = ETH_P_ALL, }; diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_ptp.c b/drivers/net/ethernet/microchip/sparx5/sparx5_ptp.c index af85d66248b2..0edb98cef7e4 100644 --- a/drivers/net/ethernet/microchip/sparx5/sparx5_ptp.c +++ 
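/* [Editor's note] The lan966x and sparx5 flower hunks in this series
 * are one de-duplication: both drivers carried near-identical
 * flow-dissector parse helpers, which now move into the shared
 * vcap_tc.c (further below) around a common
 * struct vcap_tc_flower_parse_usage. Each driver keeps a dispatch
 * table mixing shared and chip-specific handlers, e.g.:
 *
 *	[FLOW_DISSECTOR_KEY_ETH_ADDRS] = vcap_tc_flower_handler_ethaddr_usage,
 *	[FLOW_DISSECTOR_KEY_BASIC]     = lan966x_tc_flower_handler_basic_usage,
 *
 * so chip differences (available VCAP keys, fragment encodings) stay
 * local while the flow_rule plumbing is written once.
 */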
b/drivers/net/ethernet/microchip/sparx5/sparx5_ptp.c @@ -632,7 +632,7 @@ int sparx5_ptp_init(struct sparx5 *sparx5) /* Enable master counters */ spx5_wr(PTP_PTP_DOM_CFG_PTP_ENA_SET(0x7), sparx5, PTP_PTP_DOM_CFG); - for (i = 0; i < sparx5->port_count; i++) { + for (i = 0; i < SPX5_PORTS; i++) { port = sparx5->ports[i]; if (!port) continue; @@ -648,7 +648,7 @@ void sparx5_ptp_deinit(struct sparx5 *sparx5) struct sparx5_port *port; int i; - for (i = 0; i < sparx5->port_count; i++) { + for (i = 0; i < SPX5_PORTS; i++) { port = sparx5->ports[i]; if (!port) continue; diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_tc_flower.c b/drivers/net/ethernet/microchip/sparx5/sparx5_tc_flower.c index f962304272c2..d73668dcc6b6 100644 --- a/drivers/net/ethernet/microchip/sparx5/sparx5_tc_flower.c +++ b/drivers/net/ethernet/microchip/sparx5/sparx5_tc_flower.c @@ -10,6 +10,7 @@ #include "sparx5_tc.h" #include "vcap_api.h" #include "vcap_api_client.h" +#include "vcap_tc.h" #include "sparx5_main.h" #include "sparx5_vcap_impl.h" @@ -27,223 +28,8 @@ struct sparx5_multiple_rules { struct sparx5_wildcard_rule rule[SPX5_MAX_RULE_SIZE]; }; -struct sparx5_tc_flower_parse_usage { - struct flow_cls_offload *fco; - struct flow_rule *frule; - struct vcap_rule *vrule; - struct vcap_admin *admin; - u16 l3_proto; - u8 l4_proto; - unsigned int used_keys; -}; - -enum sparx5_is2_arp_opcode { - SPX5_IS2_ARP_REQUEST, - SPX5_IS2_ARP_REPLY, - SPX5_IS2_RARP_REQUEST, - SPX5_IS2_RARP_REPLY, -}; - -enum tc_arp_opcode { - TC_ARP_OP_RESERVED, - TC_ARP_OP_REQUEST, - TC_ARP_OP_REPLY, -}; - -static int sparx5_tc_flower_handler_ethaddr_usage(struct sparx5_tc_flower_parse_usage *st) -{ - enum vcap_key_field smac_key = VCAP_KF_L2_SMAC; - enum vcap_key_field dmac_key = VCAP_KF_L2_DMAC; - struct flow_match_eth_addrs match; - struct vcap_u48_key smac, dmac; - int err = 0; - - flow_rule_match_eth_addrs(st->frule, &match); - - if (!is_zero_ether_addr(match.mask->src)) { - vcap_netbytes_copy(smac.value, match.key->src, ETH_ALEN); - vcap_netbytes_copy(smac.mask, match.mask->src, ETH_ALEN); - err = vcap_rule_add_key_u48(st->vrule, smac_key, &smac); - if (err) - goto out; - } - - if (!is_zero_ether_addr(match.mask->dst)) { - vcap_netbytes_copy(dmac.value, match.key->dst, ETH_ALEN); - vcap_netbytes_copy(dmac.mask, match.mask->dst, ETH_ALEN); - err = vcap_rule_add_key_u48(st->vrule, dmac_key, &dmac); - if (err) - goto out; - } - - st->used_keys |= BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS); - - return err; - -out: - NL_SET_ERR_MSG_MOD(st->fco->common.extack, "eth_addr parse error"); - return err; -} - static int -sparx5_tc_flower_handler_ipv4_usage(struct sparx5_tc_flower_parse_usage *st) -{ - int err = 0; - - if (st->l3_proto == ETH_P_IP) { - struct flow_match_ipv4_addrs mt; - - flow_rule_match_ipv4_addrs(st->frule, &mt); - if (mt.mask->src) { - err = vcap_rule_add_key_u32(st->vrule, - VCAP_KF_L3_IP4_SIP, - be32_to_cpu(mt.key->src), - be32_to_cpu(mt.mask->src)); - if (err) - goto out; - } - if (mt.mask->dst) { - err = vcap_rule_add_key_u32(st->vrule, - VCAP_KF_L3_IP4_DIP, - be32_to_cpu(mt.key->dst), - be32_to_cpu(mt.mask->dst)); - if (err) - goto out; - } - } - - st->used_keys |= BIT(FLOW_DISSECTOR_KEY_IPV4_ADDRS); - - return err; - -out: - NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ipv4_addr parse error"); - return err; -} - -static int -sparx5_tc_flower_handler_ipv6_usage(struct sparx5_tc_flower_parse_usage *st) -{ - int err = 0; - - if (st->l3_proto == ETH_P_IPV6) { - struct flow_match_ipv6_addrs mt; - struct vcap_u128_key sip; - struct 
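/* [Editor's note] The sparx5_ptp hunk above swaps the loop bound from
 * sparx5->port_count to the SPX5_PORTS array size. ports[] is indexed
 * by physical port number and may be sparsely populated (hence the
 * !port check), so walking only the first port_count slots could skip
 * real ports; iterating the whole array is the safe form:
 *
 *	for (i = 0; i < SPX5_PORTS; i++) {
 *		port = sparx5->ports[i];
 *		if (!port)
 *			continue;
 *		...
 *	}
 */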
vcap_u128_key dip; - - flow_rule_match_ipv6_addrs(st->frule, &mt); - /* Check if address masks are non-zero */ - if (!ipv6_addr_any(&mt.mask->src)) { - vcap_netbytes_copy(sip.value, mt.key->src.s6_addr, 16); - vcap_netbytes_copy(sip.mask, mt.mask->src.s6_addr, 16); - err = vcap_rule_add_key_u128(st->vrule, - VCAP_KF_L3_IP6_SIP, &sip); - if (err) - goto out; - } - if (!ipv6_addr_any(&mt.mask->dst)) { - vcap_netbytes_copy(dip.value, mt.key->dst.s6_addr, 16); - vcap_netbytes_copy(dip.mask, mt.mask->dst.s6_addr, 16); - err = vcap_rule_add_key_u128(st->vrule, - VCAP_KF_L3_IP6_DIP, &dip); - if (err) - goto out; - } - } - st->used_keys |= BIT(FLOW_DISSECTOR_KEY_IPV6_ADDRS); - return err; -out: - NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ipv6_addr parse error"); - return err; -} - -static int -sparx5_tc_flower_handler_control_usage(struct sparx5_tc_flower_parse_usage *st) -{ - struct flow_match_control mt; - u32 value, mask; - int err = 0; - - flow_rule_match_control(st->frule, &mt); - - if (mt.mask->flags) { - if (mt.mask->flags & FLOW_DIS_FIRST_FRAG) { - if (mt.key->flags & FLOW_DIS_FIRST_FRAG) { - value = 1; /* initial fragment */ - mask = 0x3; - } else { - if (mt.mask->flags & FLOW_DIS_IS_FRAGMENT) { - value = 3; /* follow up fragment */ - mask = 0x3; - } else { - value = 0; /* no fragment */ - mask = 0x3; - } - } - } else { - if (mt.mask->flags & FLOW_DIS_IS_FRAGMENT) { - value = 3; /* follow up fragment */ - mask = 0x3; - } else { - value = 0; /* no fragment */ - mask = 0x3; - } - } - - err = vcap_rule_add_key_u32(st->vrule, - VCAP_KF_L3_FRAGMENT_TYPE, - value, mask); - if (err) - goto out; - } - - st->used_keys |= BIT(FLOW_DISSECTOR_KEY_CONTROL); - - return err; - -out: - NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ip_frag parse error"); - return err; -} - -static int -sparx5_tc_flower_handler_portnum_usage(struct sparx5_tc_flower_parse_usage *st) -{ - struct flow_match_ports mt; - u16 value, mask; - int err = 0; - - flow_rule_match_ports(st->frule, &mt); - - if (mt.mask->src) { - value = be16_to_cpu(mt.key->src); - mask = be16_to_cpu(mt.mask->src); - err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L4_SPORT, value, - mask); - if (err) - goto out; - } - - if (mt.mask->dst) { - value = be16_to_cpu(mt.key->dst); - mask = be16_to_cpu(mt.mask->dst); - err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L4_DPORT, value, - mask); - if (err) - goto out; - } - - st->used_keys |= BIT(FLOW_DISSECTOR_KEY_PORTS); - - return err; - -out: - NL_SET_ERR_MSG_MOD(st->fco->common.extack, "port parse error"); - return err; -} - -static int -sparx5_tc_flower_handler_basic_usage(struct sparx5_tc_flower_parse_usage *st) +sparx5_tc_flower_handler_basic_usage(struct vcap_tc_flower_parse_usage *st) { struct flow_match_basic mt; int err = 0; @@ -274,7 +60,6 @@ sparx5_tc_flower_handler_basic_usage(struct sparx5_tc_flower_parse_usage *st) if (err) goto out; } - } } @@ -318,268 +103,92 @@ out: } static int -sparx5_tc_flower_handler_cvlan_usage(struct sparx5_tc_flower_parse_usage *st) -{ - enum vcap_key_field vid_key = VCAP_KF_8021Q_VID0; - enum vcap_key_field pcp_key = VCAP_KF_8021Q_PCP0; - struct flow_match_vlan mt; - u16 tpid; - int err; - - if (st->admin->vtype != VCAP_TYPE_IS0) { - NL_SET_ERR_MSG_MOD(st->fco->common.extack, - "cvlan not supported in this VCAP"); - return -EINVAL; - } - - flow_rule_match_cvlan(st->frule, &mt); - - tpid = be16_to_cpu(mt.key->vlan_tpid); - - if (tpid == ETH_P_8021Q) { - vid_key = VCAP_KF_8021Q_VID1; - pcp_key = VCAP_KF_8021Q_PCP1; - } - - if (mt.mask->vlan_id) { - err = 
vcap_rule_add_key_u32(st->vrule, vid_key, - mt.key->vlan_id, - mt.mask->vlan_id); - if (err) - goto out; - } - - if (mt.mask->vlan_priority) { - err = vcap_rule_add_key_u32(st->vrule, pcp_key, - mt.key->vlan_priority, - mt.mask->vlan_priority); - if (err) - goto out; - } - - st->used_keys |= BIT(FLOW_DISSECTOR_KEY_CVLAN); - - return 0; -out: - NL_SET_ERR_MSG_MOD(st->fco->common.extack, "cvlan parse error"); - return err; -} - -static int -sparx5_tc_flower_handler_vlan_usage(struct sparx5_tc_flower_parse_usage *st) +sparx5_tc_flower_handler_control_usage(struct vcap_tc_flower_parse_usage *st) { - enum vcap_key_field vid_key = VCAP_KF_8021Q_VID_CLS; - enum vcap_key_field pcp_key = VCAP_KF_8021Q_PCP_CLS; - struct flow_match_vlan mt; - int err; - - flow_rule_match_vlan(st->frule, &mt); - - if (st->admin->vtype == VCAP_TYPE_IS0) { - vid_key = VCAP_KF_8021Q_VID0; - pcp_key = VCAP_KF_8021Q_PCP0; - } - - if (mt.mask->vlan_id) { - err = vcap_rule_add_key_u32(st->vrule, vid_key, - mt.key->vlan_id, - mt.mask->vlan_id); - if (err) - goto out; - } - - if (mt.mask->vlan_priority) { - err = vcap_rule_add_key_u32(st->vrule, pcp_key, - mt.key->vlan_priority, - mt.mask->vlan_priority); - if (err) - goto out; - } - - st->used_keys |= BIT(FLOW_DISSECTOR_KEY_VLAN); - - return 0; -out: - NL_SET_ERR_MSG_MOD(st->fco->common.extack, "vlan parse error"); - return err; -} - -static int -sparx5_tc_flower_handler_tcp_usage(struct sparx5_tc_flower_parse_usage *st) -{ - struct flow_match_tcp mt; - u16 tcp_flags_mask; - u16 tcp_flags_key; - enum vcap_bit val; + struct flow_match_control mt; + u32 value, mask; int err = 0; - flow_rule_match_tcp(st->frule, &mt); - tcp_flags_key = be16_to_cpu(mt.key->flags); - tcp_flags_mask = be16_to_cpu(mt.mask->flags); - - if (tcp_flags_mask & TCPHDR_FIN) { - val = VCAP_BIT_0; - if (tcp_flags_key & TCPHDR_FIN) - val = VCAP_BIT_1; - err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_FIN, val); - if (err) - goto out; - } - - if (tcp_flags_mask & TCPHDR_SYN) { - val = VCAP_BIT_0; - if (tcp_flags_key & TCPHDR_SYN) - val = VCAP_BIT_1; - err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_SYN, val); - if (err) - goto out; - } - - if (tcp_flags_mask & TCPHDR_RST) { - val = VCAP_BIT_0; - if (tcp_flags_key & TCPHDR_RST) - val = VCAP_BIT_1; - err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_RST, val); - if (err) - goto out; - } - - if (tcp_flags_mask & TCPHDR_PSH) { - val = VCAP_BIT_0; - if (tcp_flags_key & TCPHDR_PSH) - val = VCAP_BIT_1; - err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_PSH, val); - if (err) - goto out; - } + flow_rule_match_control(st->frule, &mt); - if (tcp_flags_mask & TCPHDR_ACK) { - val = VCAP_BIT_0; - if (tcp_flags_key & TCPHDR_ACK) - val = VCAP_BIT_1; - err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_ACK, val); - if (err) - goto out; - } + if (mt.mask->flags) { + if (mt.mask->flags & FLOW_DIS_FIRST_FRAG) { + if (mt.key->flags & FLOW_DIS_FIRST_FRAG) { + value = 1; /* initial fragment */ + mask = 0x3; + } else { + if (mt.mask->flags & FLOW_DIS_IS_FRAGMENT) { + value = 3; /* follow up fragment */ + mask = 0x3; + } else { + value = 0; /* no fragment */ + mask = 0x3; + } + } + } else { + if (mt.mask->flags & FLOW_DIS_IS_FRAGMENT) { + value = 3; /* follow up fragment */ + mask = 0x3; + } else { + value = 0; /* no fragment */ + mask = 0x3; + } + } - if (tcp_flags_mask & TCPHDR_URG) { - val = VCAP_BIT_0; - if (tcp_flags_key & TCPHDR_URG) - val = VCAP_BIT_1; - err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_URG, val); + err = vcap_rule_add_key_u32(st->vrule, + 
VCAP_KF_L3_FRAGMENT_TYPE, + value, mask); if (err) goto out; } - st->used_keys |= BIT(FLOW_DISSECTOR_KEY_TCP); + st->used_keys |= BIT(FLOW_DISSECTOR_KEY_CONTROL); return err; out: - NL_SET_ERR_MSG_MOD(st->fco->common.extack, "tcp_flags parse error"); + NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ip_frag parse error"); return err; } static int -sparx5_tc_flower_handler_arp_usage(struct sparx5_tc_flower_parse_usage *st) +sparx5_tc_flower_handler_cvlan_usage(struct vcap_tc_flower_parse_usage *st) { - struct flow_match_arp mt; - u16 value, mask; - u32 ipval, ipmsk; - int err; - - flow_rule_match_arp(st->frule, &mt); - - if (mt.mask->op) { - mask = 0x3; - if (st->l3_proto == ETH_P_ARP) { - value = mt.key->op == TC_ARP_OP_REQUEST ? - SPX5_IS2_ARP_REQUEST : - SPX5_IS2_ARP_REPLY; - } else { /* RARP */ - value = mt.key->op == TC_ARP_OP_REQUEST ? - SPX5_IS2_RARP_REQUEST : - SPX5_IS2_RARP_REPLY; - } - err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_ARP_OPCODE, - value, mask); - if (err) - goto out; - } - - /* The IS2 ARP keyset does not support ARP hardware addresses */ - if (!is_zero_ether_addr(mt.mask->sha) || - !is_zero_ether_addr(mt.mask->tha)) { - err = -EINVAL; - goto out; - } - - if (mt.mask->sip) { - ipval = be32_to_cpu((__force __be32)mt.key->sip); - ipmsk = be32_to_cpu((__force __be32)mt.mask->sip); - - err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L3_IP4_SIP, - ipval, ipmsk); - if (err) - goto out; - } - - if (mt.mask->tip) { - ipval = be32_to_cpu((__force __be32)mt.key->tip); - ipmsk = be32_to_cpu((__force __be32)mt.mask->tip); - - err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L3_IP4_DIP, - ipval, ipmsk); - if (err) - goto out; + if (st->admin->vtype != VCAP_TYPE_IS0) { + NL_SET_ERR_MSG_MOD(st->fco->common.extack, + "cvlan not supported in this VCAP"); + return -EINVAL; } - st->used_keys |= BIT(FLOW_DISSECTOR_KEY_ARP); - - return 0; - -out: - NL_SET_ERR_MSG_MOD(st->fco->common.extack, "arp parse error"); - return err; + return vcap_tc_flower_handler_cvlan_usage(st); } static int -sparx5_tc_flower_handler_ip_usage(struct sparx5_tc_flower_parse_usage *st) +sparx5_tc_flower_handler_vlan_usage(struct vcap_tc_flower_parse_usage *st) { - struct flow_match_ip mt; - int err = 0; - - flow_rule_match_ip(st->frule, &mt); + enum vcap_key_field vid_key = VCAP_KF_8021Q_VID_CLS; + enum vcap_key_field pcp_key = VCAP_KF_8021Q_PCP_CLS; - if (mt.mask->tos) { - err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L3_TOS, - mt.key->tos, - mt.mask->tos); - if (err) - goto out; + if (st->admin->vtype == VCAP_TYPE_IS0) { + vid_key = VCAP_KF_8021Q_VID0; + pcp_key = VCAP_KF_8021Q_PCP0; } - st->used_keys |= BIT(FLOW_DISSECTOR_KEY_IP); - - return err; - -out: - NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ip_tos parse error"); - return err; + return vcap_tc_flower_handler_vlan_usage(st, vid_key, pcp_key); } -static int (*sparx5_tc_flower_usage_handlers[])(struct sparx5_tc_flower_parse_usage *st) = { - [FLOW_DISSECTOR_KEY_ETH_ADDRS] = sparx5_tc_flower_handler_ethaddr_usage, - [FLOW_DISSECTOR_KEY_IPV4_ADDRS] = sparx5_tc_flower_handler_ipv4_usage, - [FLOW_DISSECTOR_KEY_IPV6_ADDRS] = sparx5_tc_flower_handler_ipv6_usage, +static int (*sparx5_tc_flower_usage_handlers[])(struct vcap_tc_flower_parse_usage *st) = { + [FLOW_DISSECTOR_KEY_ETH_ADDRS] = vcap_tc_flower_handler_ethaddr_usage, + [FLOW_DISSECTOR_KEY_IPV4_ADDRS] = vcap_tc_flower_handler_ipv4_usage, + [FLOW_DISSECTOR_KEY_IPV6_ADDRS] = vcap_tc_flower_handler_ipv6_usage, [FLOW_DISSECTOR_KEY_CONTROL] = sparx5_tc_flower_handler_control_usage, - [FLOW_DISSECTOR_KEY_PORTS] = 
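/* [Editor's note] The sparx5 control handler above packs the flower
 * fragment flags into the 2-bit VCAP_KF_L3_FRAGMENT_TYPE key (mask
 * 0x3 in every branch); the nested ifs reduce to a small truth table:
 *
 *	not a fragment      -> value 0
 *	initial fragment    -> value 1
 *	follow-up fragment  -> value 3
 *
 * lan966x instead expresses the same dissector flags as two
 * independent bits, VCAP_KF_L3_FRAGMENT and VCAP_KF_L3_FRAG_OFS_GT0,
 * which is why its control handler stays driver-specific.
 */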
sparx5_tc_flower_handler_portnum_usage, + [FLOW_DISSECTOR_KEY_PORTS] = vcap_tc_flower_handler_portnum_usage, [FLOW_DISSECTOR_KEY_BASIC] = sparx5_tc_flower_handler_basic_usage, [FLOW_DISSECTOR_KEY_CVLAN] = sparx5_tc_flower_handler_cvlan_usage, [FLOW_DISSECTOR_KEY_VLAN] = sparx5_tc_flower_handler_vlan_usage, - [FLOW_DISSECTOR_KEY_TCP] = sparx5_tc_flower_handler_tcp_usage, - [FLOW_DISSECTOR_KEY_ARP] = sparx5_tc_flower_handler_arp_usage, - [FLOW_DISSECTOR_KEY_IP] = sparx5_tc_flower_handler_ip_usage, + [FLOW_DISSECTOR_KEY_TCP] = vcap_tc_flower_handler_tcp_usage, + [FLOW_DISSECTOR_KEY_ARP] = vcap_tc_flower_handler_arp_usage, + [FLOW_DISSECTOR_KEY_IP] = vcap_tc_flower_handler_ip_usage, }; static int sparx5_tc_use_dissectors(struct flow_cls_offload *fco, @@ -587,7 +196,7 @@ static int sparx5_tc_use_dissectors(struct flow_cls_offload *fco, struct vcap_rule *vrule, u16 *l3_proto) { - struct sparx5_tc_flower_parse_usage state = { + struct vcap_tc_flower_parse_usage state = { .fco = fco, .vrule = vrule, .l3_proto = ETH_P_ALL, diff --git a/drivers/net/ethernet/microchip/vcap/Makefile b/drivers/net/ethernet/microchip/vcap/Makefile index 0adb8f5a8735..c86f20e6491f 100644 --- a/drivers/net/ethernet/microchip/vcap/Makefile +++ b/drivers/net/ethernet/microchip/vcap/Makefile @@ -7,4 +7,4 @@ obj-$(CONFIG_VCAP) += vcap.o obj-$(CONFIG_VCAP_KUNIT_TEST) += vcap_model_kunit.o vcap-$(CONFIG_DEBUG_FS) += vcap_api_debugfs.o -vcap-y += vcap_api.o +vcap-y += vcap_api.o vcap_tc.o diff --git a/drivers/net/ethernet/microchip/vcap/vcap_tc.c b/drivers/net/ethernet/microchip/vcap/vcap_tc.c new file mode 100644 index 000000000000..09a994a7cec2 --- /dev/null +++ b/drivers/net/ethernet/microchip/vcap/vcap_tc.c @@ -0,0 +1,409 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* Microchip VCAP TC + * + * Copyright (c) 2023 Microchip Technology Inc. and its subsidiaries. 
+ */ + +#include <net/flow_offload.h> +#include <net/ipv6.h> +#include <net/tcp.h> + +#include "vcap_api_client.h" +#include "vcap_tc.h" + +enum vcap_is2_arp_opcode { + VCAP_IS2_ARP_REQUEST, + VCAP_IS2_ARP_REPLY, + VCAP_IS2_RARP_REQUEST, + VCAP_IS2_RARP_REPLY, +}; + +enum vcap_arp_opcode { + VCAP_ARP_OP_RESERVED, + VCAP_ARP_OP_REQUEST, + VCAP_ARP_OP_REPLY, +}; + +int vcap_tc_flower_handler_ethaddr_usage(struct vcap_tc_flower_parse_usage *st) +{ + enum vcap_key_field smac_key = VCAP_KF_L2_SMAC; + enum vcap_key_field dmac_key = VCAP_KF_L2_DMAC; + struct flow_match_eth_addrs match; + struct vcap_u48_key smac, dmac; + int err = 0; + + flow_rule_match_eth_addrs(st->frule, &match); + + if (!is_zero_ether_addr(match.mask->src)) { + vcap_netbytes_copy(smac.value, match.key->src, ETH_ALEN); + vcap_netbytes_copy(smac.mask, match.mask->src, ETH_ALEN); + err = vcap_rule_add_key_u48(st->vrule, smac_key, &smac); + if (err) + goto out; + } + + if (!is_zero_ether_addr(match.mask->dst)) { + vcap_netbytes_copy(dmac.value, match.key->dst, ETH_ALEN); + vcap_netbytes_copy(dmac.mask, match.mask->dst, ETH_ALEN); + err = vcap_rule_add_key_u48(st->vrule, dmac_key, &dmac); + if (err) + goto out; + } + + st->used_keys |= BIT(FLOW_DISSECTOR_KEY_ETH_ADDRS); + + return err; + +out: + NL_SET_ERR_MSG_MOD(st->fco->common.extack, "eth_addr parse error"); + return err; +} +EXPORT_SYMBOL_GPL(vcap_tc_flower_handler_ethaddr_usage); + +int vcap_tc_flower_handler_ipv4_usage(struct vcap_tc_flower_parse_usage *st) +{ + int err = 0; + + if (st->l3_proto == ETH_P_IP) { + struct flow_match_ipv4_addrs mt; + + flow_rule_match_ipv4_addrs(st->frule, &mt); + if (mt.mask->src) { + err = vcap_rule_add_key_u32(st->vrule, + VCAP_KF_L3_IP4_SIP, + be32_to_cpu(mt.key->src), + be32_to_cpu(mt.mask->src)); + if (err) + goto out; + } + if (mt.mask->dst) { + err = vcap_rule_add_key_u32(st->vrule, + VCAP_KF_L3_IP4_DIP, + be32_to_cpu(mt.key->dst), + be32_to_cpu(mt.mask->dst)); + if (err) + goto out; + } + } + + st->used_keys |= BIT(FLOW_DISSECTOR_KEY_IPV4_ADDRS); + + return err; + +out: + NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ipv4_addr parse error"); + return err; +} +EXPORT_SYMBOL_GPL(vcap_tc_flower_handler_ipv4_usage); + +int vcap_tc_flower_handler_ipv6_usage(struct vcap_tc_flower_parse_usage *st) +{ + int err = 0; + + if (st->l3_proto == ETH_P_IPV6) { + struct flow_match_ipv6_addrs mt; + struct vcap_u128_key sip; + struct vcap_u128_key dip; + + flow_rule_match_ipv6_addrs(st->frule, &mt); + /* Check if address masks are non-zero */ + if (!ipv6_addr_any(&mt.mask->src)) { + vcap_netbytes_copy(sip.value, mt.key->src.s6_addr, 16); + vcap_netbytes_copy(sip.mask, mt.mask->src.s6_addr, 16); + err = vcap_rule_add_key_u128(st->vrule, + VCAP_KF_L3_IP6_SIP, &sip); + if (err) + goto out; + } + if (!ipv6_addr_any(&mt.mask->dst)) { + vcap_netbytes_copy(dip.value, mt.key->dst.s6_addr, 16); + vcap_netbytes_copy(dip.mask, mt.mask->dst.s6_addr, 16); + err = vcap_rule_add_key_u128(st->vrule, + VCAP_KF_L3_IP6_DIP, &dip); + if (err) + goto out; + } + } + st->used_keys |= BIT(FLOW_DISSECTOR_KEY_IPV6_ADDRS); + return err; +out: + NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ipv6_addr parse error"); + return err; +} +EXPORT_SYMBOL_GPL(vcap_tc_flower_handler_ipv6_usage); + +int vcap_tc_flower_handler_portnum_usage(struct vcap_tc_flower_parse_usage *st) +{ + struct flow_match_ports mt; + u16 value, mask; + int err = 0; + + flow_rule_match_ports(st->frule, &mt); + + if (mt.mask->src) { + value = be16_to_cpu(mt.key->src); + mask = be16_to_cpu(mt.mask->src); + err = 
vcap_rule_add_key_u32(st->vrule, VCAP_KF_L4_SPORT, value, + mask); + if (err) + goto out; + } + + if (mt.mask->dst) { + value = be16_to_cpu(mt.key->dst); + mask = be16_to_cpu(mt.mask->dst); + err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L4_DPORT, value, + mask); + if (err) + goto out; + } + + st->used_keys |= BIT(FLOW_DISSECTOR_KEY_PORTS); + + return err; + +out: + NL_SET_ERR_MSG_MOD(st->fco->common.extack, "port parse error"); + return err; +} +EXPORT_SYMBOL_GPL(vcap_tc_flower_handler_portnum_usage); + +int vcap_tc_flower_handler_cvlan_usage(struct vcap_tc_flower_parse_usage *st) +{ + enum vcap_key_field vid_key = VCAP_KF_8021Q_VID0; + enum vcap_key_field pcp_key = VCAP_KF_8021Q_PCP0; + struct flow_match_vlan mt; + u16 tpid; + int err; + + flow_rule_match_cvlan(st->frule, &mt); + + tpid = be16_to_cpu(mt.key->vlan_tpid); + + if (tpid == ETH_P_8021Q) { + vid_key = VCAP_KF_8021Q_VID1; + pcp_key = VCAP_KF_8021Q_PCP1; + } + + if (mt.mask->vlan_id) { + err = vcap_rule_add_key_u32(st->vrule, vid_key, + mt.key->vlan_id, + mt.mask->vlan_id); + if (err) + goto out; + } + + if (mt.mask->vlan_priority) { + err = vcap_rule_add_key_u32(st->vrule, pcp_key, + mt.key->vlan_priority, + mt.mask->vlan_priority); + if (err) + goto out; + } + + st->used_keys |= BIT(FLOW_DISSECTOR_KEY_CVLAN); + + return 0; +out: + NL_SET_ERR_MSG_MOD(st->fco->common.extack, "cvlan parse error"); + return err; +} +EXPORT_SYMBOL_GPL(vcap_tc_flower_handler_cvlan_usage); + +int vcap_tc_flower_handler_vlan_usage(struct vcap_tc_flower_parse_usage *st, + enum vcap_key_field vid_key, + enum vcap_key_field pcp_key) +{ + struct flow_match_vlan mt; + int err; + + flow_rule_match_vlan(st->frule, &mt); + + if (mt.mask->vlan_id) { + err = vcap_rule_add_key_u32(st->vrule, vid_key, + mt.key->vlan_id, + mt.mask->vlan_id); + if (err) + goto out; + } + + if (mt.mask->vlan_priority) { + err = vcap_rule_add_key_u32(st->vrule, pcp_key, + mt.key->vlan_priority, + mt.mask->vlan_priority); + if (err) + goto out; + } + + st->used_keys |= BIT(FLOW_DISSECTOR_KEY_VLAN); + + return 0; +out: + NL_SET_ERR_MSG_MOD(st->fco->common.extack, "vlan parse error"); + return err; +} +EXPORT_SYMBOL_GPL(vcap_tc_flower_handler_vlan_usage); + +int vcap_tc_flower_handler_tcp_usage(struct vcap_tc_flower_parse_usage *st) +{ + struct flow_match_tcp mt; + u16 tcp_flags_mask; + u16 tcp_flags_key; + enum vcap_bit val; + int err = 0; + + flow_rule_match_tcp(st->frule, &mt); + tcp_flags_key = be16_to_cpu(mt.key->flags); + tcp_flags_mask = be16_to_cpu(mt.mask->flags); + + if (tcp_flags_mask & TCPHDR_FIN) { + val = VCAP_BIT_0; + if (tcp_flags_key & TCPHDR_FIN) + val = VCAP_BIT_1; + err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_FIN, val); + if (err) + goto out; + } + + if (tcp_flags_mask & TCPHDR_SYN) { + val = VCAP_BIT_0; + if (tcp_flags_key & TCPHDR_SYN) + val = VCAP_BIT_1; + err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_SYN, val); + if (err) + goto out; + } + + if (tcp_flags_mask & TCPHDR_RST) { + val = VCAP_BIT_0; + if (tcp_flags_key & TCPHDR_RST) + val = VCAP_BIT_1; + err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_RST, val); + if (err) + goto out; + } + + if (tcp_flags_mask & TCPHDR_PSH) { + val = VCAP_BIT_0; + if (tcp_flags_key & TCPHDR_PSH) + val = VCAP_BIT_1; + err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_PSH, val); + if (err) + goto out; + } + + if (tcp_flags_mask & TCPHDR_ACK) { + val = VCAP_BIT_0; + if (tcp_flags_key & TCPHDR_ACK) + val = VCAP_BIT_1; + err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_ACK, val); + if (err) + goto out; + } + + if 
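/* [Editor's note] vcap_tc_flower_handler_tcp_usage() maps each TCP
 * flag onto a VCAP tri-state: a flag absent from the mask is simply
 * not added to the rule (don't care), while a masked-in flag becomes
 * VCAP_BIT_1 or VCAP_BIT_0 from the key. All six blocks repeat one
 * pattern, sketched here for FIN:
 *
 *	if (tcp_flags_mask & TCPHDR_FIN)
 *		err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_FIN,
 *					    (tcp_flags_key & TCPHDR_FIN) ?
 *					    VCAP_BIT_1 : VCAP_BIT_0);
 */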
(tcp_flags_mask & TCPHDR_URG) { + val = VCAP_BIT_0; + if (tcp_flags_key & TCPHDR_URG) + val = VCAP_BIT_1; + err = vcap_rule_add_key_bit(st->vrule, VCAP_KF_L4_URG, val); + if (err) + goto out; + } + + st->used_keys |= BIT(FLOW_DISSECTOR_KEY_TCP); + + return err; + +out: + NL_SET_ERR_MSG_MOD(st->fco->common.extack, "tcp_flags parse error"); + return err; +} +EXPORT_SYMBOL_GPL(vcap_tc_flower_handler_tcp_usage); + +int vcap_tc_flower_handler_arp_usage(struct vcap_tc_flower_parse_usage *st) +{ + struct flow_match_arp mt; + u16 value, mask; + u32 ipval, ipmsk; + int err; + + flow_rule_match_arp(st->frule, &mt); + + if (mt.mask->op) { + mask = 0x3; + if (st->l3_proto == ETH_P_ARP) { + value = mt.key->op == VCAP_ARP_OP_REQUEST ? + VCAP_IS2_ARP_REQUEST : + VCAP_IS2_ARP_REPLY; + } else { /* RARP */ + value = mt.key->op == VCAP_ARP_OP_REQUEST ? + VCAP_IS2_RARP_REQUEST : + VCAP_IS2_RARP_REPLY; + } + err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_ARP_OPCODE, + value, mask); + if (err) + goto out; + } + + /* The IS2 ARP keyset does not support ARP hardware addresses */ + if (!is_zero_ether_addr(mt.mask->sha) || + !is_zero_ether_addr(mt.mask->tha)) { + err = -EINVAL; + goto out; + } + + if (mt.mask->sip) { + ipval = be32_to_cpu((__force __be32)mt.key->sip); + ipmsk = be32_to_cpu((__force __be32)mt.mask->sip); + + err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L3_IP4_SIP, + ipval, ipmsk); + if (err) + goto out; + } + + if (mt.mask->tip) { + ipval = be32_to_cpu((__force __be32)mt.key->tip); + ipmsk = be32_to_cpu((__force __be32)mt.mask->tip); + + err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L3_IP4_DIP, + ipval, ipmsk); + if (err) + goto out; + } + + st->used_keys |= BIT(FLOW_DISSECTOR_KEY_ARP); + + return 0; + +out: + NL_SET_ERR_MSG_MOD(st->fco->common.extack, "arp parse error"); + return err; +} +EXPORT_SYMBOL_GPL(vcap_tc_flower_handler_arp_usage); + +int vcap_tc_flower_handler_ip_usage(struct vcap_tc_flower_parse_usage *st) +{ + struct flow_match_ip mt; + int err = 0; + + flow_rule_match_ip(st->frule, &mt); + + if (mt.mask->tos) { + err = vcap_rule_add_key_u32(st->vrule, VCAP_KF_L3_TOS, + mt.key->tos, + mt.mask->tos); + if (err) + goto out; + } + + st->used_keys |= BIT(FLOW_DISSECTOR_KEY_IP); + + return err; + +out: + NL_SET_ERR_MSG_MOD(st->fco->common.extack, "ip_tos parse error"); + return err; +} +EXPORT_SYMBOL_GPL(vcap_tc_flower_handler_ip_usage); diff --git a/drivers/net/ethernet/microchip/vcap/vcap_tc.h b/drivers/net/ethernet/microchip/vcap/vcap_tc.h new file mode 100644 index 000000000000..5c55ccbee175 --- /dev/null +++ b/drivers/net/ethernet/microchip/vcap/vcap_tc.h @@ -0,0 +1,31 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Copyright (C) 2023 Microchip Technology Inc. and its subsidiaries. 
+ * Microchip VCAP TC + */ + +#ifndef __VCAP_TC__ +#define __VCAP_TC__ + +struct vcap_tc_flower_parse_usage { + struct flow_cls_offload *fco; + struct flow_rule *frule; + struct vcap_rule *vrule; + struct vcap_admin *admin; + u16 l3_proto; + u8 l4_proto; + unsigned int used_keys; +}; + +int vcap_tc_flower_handler_ethaddr_usage(struct vcap_tc_flower_parse_usage *st); +int vcap_tc_flower_handler_ipv4_usage(struct vcap_tc_flower_parse_usage *st); +int vcap_tc_flower_handler_ipv6_usage(struct vcap_tc_flower_parse_usage *st); +int vcap_tc_flower_handler_portnum_usage(struct vcap_tc_flower_parse_usage *st); +int vcap_tc_flower_handler_cvlan_usage(struct vcap_tc_flower_parse_usage *st); +int vcap_tc_flower_handler_vlan_usage(struct vcap_tc_flower_parse_usage *st, + enum vcap_key_field vid_key, + enum vcap_key_field pcp_key); +int vcap_tc_flower_handler_tcp_usage(struct vcap_tc_flower_parse_usage *st); +int vcap_tc_flower_handler_arp_usage(struct vcap_tc_flower_parse_usage *st); +int vcap_tc_flower_handler_ip_usage(struct vcap_tc_flower_parse_usage *st); + +#endif /* __VCAP_TC__ */ diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c index b144f2237748..f9b8f372ec8a 100644 --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c @@ -1217,9 +1217,7 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev) unsigned int max_queues_per_port = num_online_cpus(); struct gdma_context *gc = pci_get_drvdata(pdev); struct gdma_irq_context *gic; - unsigned int max_irqs; - u16 *cpus; - cpumask_var_t req_mask; + unsigned int max_irqs, cpu; int nvec, irq; int err, i = 0, j; @@ -1240,21 +1238,7 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev) goto free_irq_vector; } - if (!zalloc_cpumask_var(&req_mask, GFP_KERNEL)) { - err = -ENOMEM; - goto free_irq; - } - - cpus = kcalloc(nvec, sizeof(*cpus), GFP_KERNEL); - if (!cpus) { - err = -ENOMEM; - goto free_mask; - } - for (i = 0; i < nvec; i++) - cpus[i] = cpumask_local_spread(i, gc->numa_node); - for (i = 0; i < nvec; i++) { - cpumask_set_cpu(cpus[i], req_mask); gic = &gc->irq_contexts[i]; gic->handler = NULL; gic->arg = NULL; @@ -1269,17 +1253,16 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev) irq = pci_irq_vector(pdev, i); if (irq < 0) { err = irq; - goto free_mask; + goto free_irq; } err = request_irq(irq, mana_gd_intr, 0, gic->name, gic); if (err) - goto free_mask; - irq_set_affinity_and_hint(irq, req_mask); - cpumask_clear(req_mask); + goto free_irq; + + cpu = cpumask_local_spread(i, gc->numa_node); + irq_set_affinity_and_hint(irq, cpumask_of(cpu)); } - free_cpumask_var(req_mask); - kfree(cpus); err = mana_gd_alloc_res_map(nvec, &gc->msix_resource); if (err) @@ -1290,13 +1273,12 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev) return 0; -free_mask: - free_cpumask_var(req_mask); - kfree(cpus); free_irq: for (j = i - 1; j >= 0; j--) { irq = pci_irq_vector(pdev, j); gic = &gc->irq_contexts[j]; + + irq_update_affinity_hint(irq, NULL); free_irq(irq, gic); } @@ -1324,6 +1306,9 @@ static void mana_gd_remove_irqs(struct pci_dev *pdev) continue; gic = &gc->irq_contexts[i]; + + /* Need to clear the hint before free_irq */ + irq_update_affinity_hint(irq, NULL); free_irq(irq, gic); } diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c index 2f6a048dee90..6120f2b6684f 100644 --- a/drivers/net/ethernet/microsoft/mana/mana_en.c +++ b/drivers/net/ethernet/microsoft/mana/mana_en.c @@ -2160,6 +2160,8 @@ 
static int mana_probe_port(struct mana_context *ac, int port_idx, ndev->hw_features |= NETIF_F_RXHASH; ndev->features = ndev->hw_features; ndev->vlan_features = 0; + ndev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT; err = register_netdev(ndev); if (err) { diff --git a/drivers/net/ethernet/mscc/Kconfig b/drivers/net/ethernet/mscc/Kconfig index 8dd8c7f425d2..81e605691bb8 100644 --- a/drivers/net/ethernet/mscc/Kconfig +++ b/drivers/net/ethernet/mscc/Kconfig @@ -13,6 +13,7 @@ if NET_VENDOR_MICROSEMI # Users should depend on NET_SWITCHDEV, HAS_IOMEM, BRIDGE config MSCC_OCELOT_SWITCH_LIB + depends on PTP_1588_CLOCK_OPTIONAL select NET_DEVLINK select REGMAP_MMIO select PACKING diff --git a/drivers/net/ethernet/mscc/ocelot_flower.c b/drivers/net/ethernet/mscc/ocelot_flower.c index 7c0897e779dc..ee052404eb55 100644 --- a/drivers/net/ethernet/mscc/ocelot_flower.c +++ b/drivers/net/ethernet/mscc/ocelot_flower.c @@ -605,6 +605,18 @@ ocelot_flower_parse_key(struct ocelot *ocelot, int port, bool ingress, flow_rule_match_control(rule, &match); } + if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_VLAN)) { + struct flow_match_vlan match; + + flow_rule_match_vlan(rule, &match); + filter->key_type = OCELOT_VCAP_KEY_ANY; + filter->vlan.vid.value = match.key->vlan_id; + filter->vlan.vid.mask = match.mask->vlan_id; + filter->vlan.pcp.value[0] = match.key->vlan_priority; + filter->vlan.pcp.mask[0] = match.mask->vlan_priority; + match_protocol = false; + } + if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ETH_ADDRS)) { struct flow_match_eth_addrs match; @@ -737,18 +749,6 @@ ocelot_flower_parse_key(struct ocelot *ocelot, int port, bool ingress, match_protocol = false; } - if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_VLAN)) { - struct flow_match_vlan match; - - flow_rule_match_vlan(rule, &match); - filter->key_type = OCELOT_VCAP_KEY_ANY; - filter->vlan.vid.value = match.key->vlan_id; - filter->vlan.vid.mask = match.mask->vlan_id; - filter->vlan.pcp.value[0] = match.key->vlan_priority; - filter->vlan.pcp.mask[0] = match.mask->vlan_priority; - match_protocol = false; - } - finished_key_parsing: if (match_protocol && proto != ETH_P_ALL) { if (filter->block_id == VCAP_ES0) { diff --git a/drivers/net/ethernet/mscc/ocelot_ptp.c b/drivers/net/ethernet/mscc/ocelot_ptp.c index 1a82f10c8853..2180ae94c744 100644 --- a/drivers/net/ethernet/mscc/ocelot_ptp.c +++ b/drivers/net/ethernet/mscc/ocelot_ptp.c @@ -335,8 +335,8 @@ static void ocelot_populate_ipv6_ptp_event_trap_key(struct ocelot_vcap_filter *trap) { trap->key_type = OCELOT_VCAP_KEY_IPV6; - trap->key.ipv4.proto.value[0] = IPPROTO_UDP; - trap->key.ipv4.proto.mask[0] = 0xff; + trap->key.ipv6.proto.value[0] = IPPROTO_UDP; + trap->key.ipv6.proto.mask[0] = 0xff; trap->key.ipv6.dport.value = PTP_EV_PORT; trap->key.ipv6.dport.mask = 0xffff; } @@ -355,8 +355,8 @@ static void ocelot_populate_ipv6_ptp_general_trap_key(struct ocelot_vcap_filter *trap) { trap->key_type = OCELOT_VCAP_KEY_IPV6; - trap->key.ipv4.proto.value[0] = IPPROTO_UDP; - trap->key.ipv4.proto.mask[0] = 0xff; + trap->key.ipv6.proto.value[0] = IPPROTO_UDP; + trap->key.ipv6.proto.mask[0] = 0xff; trap->key.ipv6.dport.value = PTP_GEN_PORT; trap->key.ipv6.dport.mask = 0xffff; } diff --git a/drivers/net/ethernet/netronome/nfp/Makefile b/drivers/net/ethernet/netronome/nfp/Makefile index c90d35f5ebca..808599b8066e 100644 --- a/drivers/net/ethernet/netronome/nfp/Makefile +++ b/drivers/net/ethernet/netronome/nfp/Makefile @@ -80,7 +80,7 @@ nfp-objs += \ abm/main.o endif 
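[ Note: the Makefile change below compiles nfdk/ipsec.o only when CONFIG_NFP_NET_IPSEC is set, while callers stay unconditional because the header pairs the real prototype with a static inline no-op (see the nfdk.h hunk further down, where nfp_nfdk_ipsec_tx() gets exactly this treatment). A minimal, self-contained sketch of that stub-header pattern, using hypothetical demo names rather than the driver's own::

  #include <stdio.h>

  /* Hypothetical stand-in for the Kconfig symbol; kbuild would add the
   * real object via a conditional list such as nfp-$(CONFIG_...) += foo.o.
   */
  /* #define CONFIG_DEMO_IPSEC 1 */

  #ifndef CONFIG_DEMO_IPSEC
  /* Stub keeps callers compiling and linking when support is off. */
  static inline unsigned long demo_ipsec_tx(unsigned long flags)
  {
  	return flags;
  }
  #else
  unsigned long demo_ipsec_tx(unsigned long flags);	/* real one lives in its own object */
  #endif

  int main(void)
  {
  	printf("flags = %#lx\n", demo_ipsec_tx(0x3UL));
  	return 0;
  }

Callers can then invoke the helper unconditionally, as nfp_nfdk_tx() does in the nfdk/dp.c hunk further down. ]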
-nfp-$(CONFIG_NFP_NET_IPSEC) += crypto/ipsec.o nfd3/ipsec.o +nfp-$(CONFIG_NFP_NET_IPSEC) += crypto/ipsec.o nfd3/ipsec.o nfdk/ipsec.o nfp-$(CONFIG_NFP_DEBUG) += nfp_net_debugfs.o diff --git a/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c b/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c index b44263177981..b8bc89fc2540 100644 --- a/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c +++ b/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c @@ -10,6 +10,7 @@ #include <linux/ktime.h> #include <net/xfrm.h> +#include "../nfpcore/nfp_dev.h" #include "../nfp_net_ctrl.h" #include "../nfp_net.h" #include "crypto.h" @@ -330,6 +331,10 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x, trunc_len = -1; break; case SADB_AALG_MD5HMAC: + if (nn->pdev->device == PCI_DEVICE_ID_NFP3800) { + NL_SET_ERR_MSG_MOD(extack, "Unsupported authentication algorithm"); + return -EINVAL; + } set_md5hmac(cfg, &trunc_len); break; case SADB_AALG_SHA1HMAC: @@ -373,6 +378,10 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x, cfg->ctrl_word.cipher = NFP_IPSEC_CIPHER_NULL; break; case SADB_EALG_3DESCBC: + if (nn->pdev->device == PCI_DEVICE_ID_NFP3800) { + NL_SET_ERR_MSG_MOD(extack, "Unsupported encryption algorithm for offload"); + return -EINVAL; + } cfg->ctrl_word.cimode = NFP_IPSEC_CIMODE_CBC; cfg->ctrl_word.cipher = NFP_IPSEC_CIPHER_3DES; break; diff --git a/drivers/net/ethernet/netronome/nfp/nfd3/dp.c b/drivers/net/ethernet/netronome/nfp/nfd3/dp.c index 861082c5dbff..59fb0583cc08 100644 --- a/drivers/net/ethernet/netronome/nfp/nfd3/dp.c +++ b/drivers/net/ethernet/netronome/nfp/nfd3/dp.c @@ -192,10 +192,10 @@ static int nfp_nfd3_prep_tx_meta(struct nfp_net_dp *dp, struct sk_buff *skb, return 0; md_bytes = sizeof(meta_id) + - !!md_dst * NFP_NET_META_PORTID_SIZE + - !!tls_handle * NFP_NET_META_CONN_HANDLE_SIZE + - vlan_insert * NFP_NET_META_VLAN_SIZE + - *ipsec * NFP_NET_META_IPSEC_FIELD_SIZE; /* IPsec has 12 bytes of metadata */ + (!!md_dst ? NFP_NET_META_PORTID_SIZE : 0) + + (!!tls_handle ? NFP_NET_META_CONN_HANDLE_SIZE : 0) + + (vlan_insert ? NFP_NET_META_VLAN_SIZE : 0) + + (*ipsec ? NFP_NET_META_IPSEC_FIELD_SIZE : 0); if (unlikely(skb_cow_head(skb, md_bytes))) return -ENOMEM; @@ -226,9 +226,6 @@ static int nfp_nfd3_prep_tx_meta(struct nfp_net_dp *dp, struct sk_buff *skb, meta_id |= NFP_NET_META_VLAN; } if (*ipsec) { - /* IPsec has three consecutive 4-bit IPsec metadata types, - * so in total IPsec has three 4 bytes of metadata. 
- */ + data -= NFP_NET_META_IPSEC_SIZE; + put_unaligned_be32(offload_info.seq_hi, data); + data -= NFP_NET_META_IPSEC_SIZE; diff --git a/drivers/net/ethernet/netronome/nfp/nfdk/dp.c b/drivers/net/ethernet/netronome/nfp/nfdk/dp.c index ccacb6ab6c39..d60c0e991a91 100644 --- a/drivers/net/ethernet/netronome/nfp/nfdk/dp.c +++ b/drivers/net/ethernet/netronome/nfp/nfdk/dp.c @@ -6,6 +6,7 @@ #include <linux/overflow.h> #include <linux/sizes.h> #include <linux/bitfield.h> +#include <net/xfrm.h> #include "../nfp_app.h" #include "../nfp_net.h" @@ -172,25 +173,32 @@ close_block: static int nfp_nfdk_prep_tx_meta(struct nfp_net_dp *dp, struct nfp_app *app, - struct sk_buff *skb) + struct sk_buff *skb, bool *ipsec) { struct metadata_dst *md_dst = skb_metadata_dst(skb); + struct nfp_ipsec_offload offload_info; unsigned char *data; bool vlan_insert; u32 meta_id = 0; int md_bytes; +#ifdef CONFIG_NFP_NET_IPSEC + if (xfrm_offload(skb)) + *ipsec = nfp_net_ipsec_tx_prep(dp, skb, &offload_info); +#endif + if (unlikely(md_dst && md_dst->type != METADATA_HW_PORT_MUX)) md_dst = NULL; vlan_insert = skb_vlan_tag_present(skb) && (dp->ctrl & NFP_NET_CFG_CTRL_TXVLAN_V2); - if (!(md_dst || vlan_insert)) + if (!(md_dst || vlan_insert || *ipsec)) return 0; md_bytes = sizeof(meta_id) + - !!md_dst * NFP_NET_META_PORTID_SIZE + - vlan_insert * NFP_NET_META_VLAN_SIZE; + (!!md_dst ? NFP_NET_META_PORTID_SIZE : 0) + + (vlan_insert ? NFP_NET_META_VLAN_SIZE : 0) + + (*ipsec ? NFP_NET_META_IPSEC_FIELD_SIZE : 0); if (unlikely(skb_cow_head(skb, md_bytes))) return -ENOMEM; @@ -212,6 +220,17 @@ nfp_nfdk_prep_tx_meta(struct nfp_net_dp *dp, struct nfp_app *app, meta_id |= NFP_NET_META_VLAN; } + if (*ipsec) { + data -= NFP_NET_META_IPSEC_SIZE; + put_unaligned_be32(offload_info.seq_hi, data); + data -= NFP_NET_META_IPSEC_SIZE; + put_unaligned_be32(offload_info.seq_low, data); + data -= NFP_NET_META_IPSEC_SIZE; + put_unaligned_be32(offload_info.handle - 1, data); + meta_id <<= NFP_NET_META_IPSEC_FIELD_SIZE; + meta_id |= NFP_NET_META_IPSEC << 8 | NFP_NET_META_IPSEC << 4 | NFP_NET_META_IPSEC; + } + meta_id = FIELD_PREP(NFDK_META_LEN, md_bytes) | FIELD_PREP(NFDK_META_FIELDS, meta_id); @@ -243,6 +262,7 @@ netdev_tx_t nfp_nfdk_tx(struct sk_buff *skb, struct net_device *netdev) struct nfp_net_dp *dp; int nr_frags, wr_idx; dma_addr_t dma_addr; + bool ipsec = false; u64 metadata; dp = &nn->dp; @@ -263,7 +283,7 @@ netdev_tx_t nfp_nfdk_tx(struct sk_buff *skb, struct net_device *netdev) return NETDEV_TX_BUSY; } - metadata = nfp_nfdk_prep_tx_meta(dp, nn->app, skb); + metadata = nfp_nfdk_prep_tx_meta(dp, nn->app, skb, &ipsec); if (unlikely((int)metadata < 0)) goto err_flush; @@ -361,6 +381,9 @@ netdev_tx_t nfp_nfdk_tx(struct sk_buff *skb, struct net_device *netdev) (txd - 1)->dma_len_type = cpu_to_le16(dlen_type | NFDK_DESC_TX_EOP); + if (ipsec) + metadata = nfp_nfdk_ipsec_tx(metadata, skb); + if (!skb_is_gso(skb)) { real_len = skb->len; /* Metadata desc */ @@ -760,6 +783,15 @@ nfp_nfdk_parse_meta(struct net_device *netdev, struct nfp_meta_parsed *meta, return false; data += sizeof(struct nfp_net_tls_resync_req); break; +#ifdef CONFIG_NFP_NET_IPSEC + case NFP_NET_META_IPSEC: + /* Note: an IPsec packet could have a zero saidx, so add 1 here + * to mark the packet as an IPsec packet within the driver.
+ */ + meta->ipsec_saidx = get_unaligned_be32(data) + 1; + data += 4; + break; +#endif default: return true; } @@ -1186,6 +1218,13 @@ static int nfp_nfdk_rx(struct nfp_net_rx_ring *rx_ring, int budget) continue; } +#ifdef CONFIG_NFP_NET_IPSEC + if (meta.ipsec_saidx != 0 && unlikely(nfp_net_ipsec_rx(&meta, skb))) { + nfp_nfdk_rx_drop(dp, r_vec, rx_ring, NULL, skb); + continue; + } +#endif + if (meta_len_xdp) skb_metadata_set(skb, meta_len_xdp); diff --git a/drivers/net/ethernet/netronome/nfp/nfdk/ipsec.c b/drivers/net/ethernet/netronome/nfp/nfdk/ipsec.c new file mode 100644 index 000000000000..58d8f59eb885 --- /dev/null +++ b/drivers/net/ethernet/netronome/nfp/nfdk/ipsec.c @@ -0,0 +1,17 @@ +// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +/* Copyright (C) 2023 Corigine, Inc */ + +#include <net/xfrm.h> + +#include "../nfp_net.h" +#include "nfdk.h" + +u64 nfp_nfdk_ipsec_tx(u64 flags, struct sk_buff *skb) +{ + struct xfrm_state *x = xfrm_input_state(skb); + + if (x->xso.dev && (x->xso.dev->features & NETIF_F_HW_ESP_TX_CSUM)) + flags |= NFDK_DESC_TX_L3_CSUM | NFDK_DESC_TX_L4_CSUM; + + return flags; +} diff --git a/drivers/net/ethernet/netronome/nfp/nfdk/nfdk.h b/drivers/net/ethernet/netronome/nfp/nfdk/nfdk.h index 0ea51d9f2325..fe55980348e9 100644 --- a/drivers/net/ethernet/netronome/nfp/nfdk/nfdk.h +++ b/drivers/net/ethernet/netronome/nfp/nfdk/nfdk.h @@ -125,4 +125,12 @@ nfp_nfdk_ctrl_tx_one(struct nfp_net *nn, struct nfp_net_r_vector *r_vec, void nfp_nfdk_ctrl_poll(struct tasklet_struct *t); void nfp_nfdk_rx_ring_fill_freelist(struct nfp_net_dp *dp, struct nfp_net_rx_ring *rx_ring); +#ifndef CONFIG_NFP_NET_IPSEC +static inline u64 nfp_nfdk_ipsec_tx(u64 flags, struct sk_buff *skb) +{ + return flags; +} +#else +u64 nfp_nfdk_ipsec_tx(u64 flags, struct sk_buff *skb); +#endif #endif diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c index 18fc9971f1c8..e4825d885560 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c @@ -2529,10 +2529,15 @@ static void nfp_net_netdev_init(struct nfp_net *nn) netdev->features &= ~NETIF_F_HW_VLAN_STAG_RX; nn->dp.ctrl &= ~NFP_NET_CFG_CTRL_RXQINQ; + netdev->xdp_features = NETDEV_XDP_ACT_BASIC; + if (nn->app && nn->app->type->id == NFP_APP_BPF_NIC) + netdev->xdp_features |= NETDEV_XDP_ACT_HW_OFFLOAD; + /* Finalise the netdev setup */ switch (nn->dp.ops->version) { case NFP_NFD_VER_NFD3: netdev->netdev_ops = &nfp_nfd3_netdev_ops; + netdev->xdp_features |= NETDEV_XDP_ACT_XSK_ZEROCOPY; break; case NFP_NFD_VER_NFDK: netdev->netdev_ops = &nfp_nfdk_netdev_ops; diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c index 807b86667bca..918319f965b3 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c +++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c @@ -293,35 +293,131 @@ nfp_net_set_fec_link_mode(struct nfp_eth_table_port *eth_port, } } -static const u16 nfp_eth_media_table[] = { - [NFP_MEDIA_1000BASE_CX] = ETHTOOL_LINK_MODE_1000baseKX_Full_BIT, - [NFP_MEDIA_1000BASE_KX] = ETHTOOL_LINK_MODE_1000baseKX_Full_BIT, - [NFP_MEDIA_10GBASE_KX4] = ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT, - [NFP_MEDIA_10GBASE_KR] = ETHTOOL_LINK_MODE_10000baseKR_Full_BIT, - [NFP_MEDIA_10GBASE_CX4] = ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT, - [NFP_MEDIA_10GBASE_CR] = ETHTOOL_LINK_MODE_10000baseCR_Full_BIT, - [NFP_MEDIA_10GBASE_SR] = ETHTOOL_LINK_MODE_10000baseSR_Full_BIT, 
- [NFP_MEDIA_10GBASE_ER] = ETHTOOL_LINK_MODE_10000baseER_Full_BIT, - [NFP_MEDIA_25GBASE_KR] = ETHTOOL_LINK_MODE_25000baseKR_Full_BIT, - [NFP_MEDIA_25GBASE_KR_S] = ETHTOOL_LINK_MODE_25000baseKR_Full_BIT, - [NFP_MEDIA_25GBASE_CR] = ETHTOOL_LINK_MODE_25000baseCR_Full_BIT, - [NFP_MEDIA_25GBASE_CR_S] = ETHTOOL_LINK_MODE_25000baseCR_Full_BIT, - [NFP_MEDIA_25GBASE_SR] = ETHTOOL_LINK_MODE_25000baseSR_Full_BIT, - [NFP_MEDIA_40GBASE_CR4] = ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT, - [NFP_MEDIA_40GBASE_KR4] = ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT, - [NFP_MEDIA_40GBASE_SR4] = ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT, - [NFP_MEDIA_40GBASE_LR4] = ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT, - [NFP_MEDIA_50GBASE_KR] = ETHTOOL_LINK_MODE_50000baseKR_Full_BIT, - [NFP_MEDIA_50GBASE_SR] = ETHTOOL_LINK_MODE_50000baseSR_Full_BIT, - [NFP_MEDIA_50GBASE_CR] = ETHTOOL_LINK_MODE_50000baseCR_Full_BIT, - [NFP_MEDIA_50GBASE_LR] = ETHTOOL_LINK_MODE_50000baseLR_ER_FR_Full_BIT, - [NFP_MEDIA_50GBASE_ER] = ETHTOOL_LINK_MODE_50000baseLR_ER_FR_Full_BIT, - [NFP_MEDIA_50GBASE_FR] = ETHTOOL_LINK_MODE_50000baseLR_ER_FR_Full_BIT, - [NFP_MEDIA_100GBASE_KR4] = ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT, - [NFP_MEDIA_100GBASE_SR4] = ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT, - [NFP_MEDIA_100GBASE_CR4] = ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT, - [NFP_MEDIA_100GBASE_KP4] = ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT, - [NFP_MEDIA_100GBASE_CR10] = ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT, +static const struct nfp_eth_media_link_mode { + u16 ethtool_link_mode; + u16 speed; +} nfp_eth_media_table[NFP_MEDIA_LINK_MODES_NUMBER] = { + [NFP_MEDIA_1000BASE_CX] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_1000baseKX_Full_BIT, + .speed = NFP_SPEED_1G, + }, + [NFP_MEDIA_1000BASE_KX] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_1000baseKX_Full_BIT, + .speed = NFP_SPEED_1G, + }, + [NFP_MEDIA_10GBASE_KX4] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT, + .speed = NFP_SPEED_10G, + }, + [NFP_MEDIA_10GBASE_KR] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_10000baseKR_Full_BIT, + .speed = NFP_SPEED_10G, + }, + [NFP_MEDIA_10GBASE_CX4] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT, + .speed = NFP_SPEED_10G, + }, + [NFP_MEDIA_10GBASE_CR] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_10000baseCR_Full_BIT, + .speed = NFP_SPEED_10G, + }, + [NFP_MEDIA_10GBASE_SR] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_10000baseSR_Full_BIT, + .speed = NFP_SPEED_10G, + }, + [NFP_MEDIA_10GBASE_ER] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_10000baseER_Full_BIT, + .speed = NFP_SPEED_10G, + }, + [NFP_MEDIA_25GBASE_KR] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_25000baseKR_Full_BIT, + .speed = NFP_SPEED_25G, + }, + [NFP_MEDIA_25GBASE_KR_S] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_25000baseKR_Full_BIT, + .speed = NFP_SPEED_25G, + }, + [NFP_MEDIA_25GBASE_CR] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_25000baseCR_Full_BIT, + .speed = NFP_SPEED_25G, + }, + [NFP_MEDIA_25GBASE_CR_S] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_25000baseCR_Full_BIT, + .speed = NFP_SPEED_25G, + }, + [NFP_MEDIA_25GBASE_SR] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_25000baseSR_Full_BIT, + .speed = NFP_SPEED_25G, + }, + [NFP_MEDIA_40GBASE_CR4] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT, + .speed = NFP_SPEED_40G, + }, + [NFP_MEDIA_40GBASE_KR4] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT, + .speed = NFP_SPEED_40G, + }, + [NFP_MEDIA_40GBASE_SR4] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT, + 
.speed = NFP_SPEED_40G, + }, + [NFP_MEDIA_40GBASE_LR4] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT, + .speed = NFP_SPEED_40G, + }, + [NFP_MEDIA_50GBASE_KR] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_50000baseKR_Full_BIT, + .speed = NFP_SPEED_50G, + }, + [NFP_MEDIA_50GBASE_SR] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_50000baseSR_Full_BIT, + .speed = NFP_SPEED_50G, + }, + [NFP_MEDIA_50GBASE_CR] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_50000baseCR_Full_BIT, + .speed = NFP_SPEED_50G, + }, + [NFP_MEDIA_50GBASE_LR] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_50000baseLR_ER_FR_Full_BIT, + .speed = NFP_SPEED_50G, + }, + [NFP_MEDIA_50GBASE_ER] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_50000baseLR_ER_FR_Full_BIT, + .speed = NFP_SPEED_50G, + }, + [NFP_MEDIA_50GBASE_FR] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_50000baseLR_ER_FR_Full_BIT, + .speed = NFP_SPEED_50G, + }, + [NFP_MEDIA_100GBASE_KR4] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT, + .speed = NFP_SPEED_100G, + }, + [NFP_MEDIA_100GBASE_SR4] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT, + .speed = NFP_SPEED_100G, + }, + [NFP_MEDIA_100GBASE_CR4] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT, + .speed = NFP_SPEED_100G, + }, + [NFP_MEDIA_100GBASE_KP4] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT, + .speed = NFP_SPEED_100G, + }, + [NFP_MEDIA_100GBASE_CR10] = { + .ethtool_link_mode = ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT, + .speed = NFP_SPEED_100G, + }, +}; + +static const unsigned int nfp_eth_speed_map[NFP_SUP_SPEED_NUMBER] = { + [NFP_SPEED_1G] = SPEED_1000, + [NFP_SPEED_10G] = SPEED_10000, + [NFP_SPEED_25G] = SPEED_25000, + [NFP_SPEED_40G] = SPEED_40000, + [NFP_SPEED_50G] = SPEED_50000, + [NFP_SPEED_100G] = SPEED_100000, }; static void nfp_add_media_link_mode(struct nfp_port *port, @@ -334,8 +430,12 @@ static void nfp_add_media_link_mode(struct nfp_port *port, }; struct nfp_cpp *cpp = port->app->cpp; - if (nfp_eth_read_media(cpp, &ethm)) + if (nfp_eth_read_media(cpp, &ethm)) { + bitmap_fill(port->speed_bitmap, NFP_SUP_SPEED_NUMBER); return; + } + + bitmap_zero(port->speed_bitmap, NFP_SUP_SPEED_NUMBER); for (u32 i = 0; i < 2; i++) { supported_modes[i] = le64_to_cpu(ethm.supported_modes[i]); @@ -344,20 +444,26 @@ static void nfp_add_media_link_mode(struct nfp_port *port, for (u32 i = 0; i < NFP_MEDIA_LINK_MODES_NUMBER; i++) { if (i < 64) { - if (supported_modes[0] & BIT_ULL(i)) - __set_bit(nfp_eth_media_table[i], + if (supported_modes[0] & BIT_ULL(i)) { + __set_bit(nfp_eth_media_table[i].ethtool_link_mode, cmd->link_modes.supported); + __set_bit(nfp_eth_media_table[i].speed, + port->speed_bitmap); + } if (advertised_modes[0] & BIT_ULL(i)) - __set_bit(nfp_eth_media_table[i], + __set_bit(nfp_eth_media_table[i].ethtool_link_mode, cmd->link_modes.advertising); } else { - if (supported_modes[1] & BIT_ULL(i - 64)) - __set_bit(nfp_eth_media_table[i], + if (supported_modes[1] & BIT_ULL(i - 64)) { + __set_bit(nfp_eth_media_table[i].ethtool_link_mode, cmd->link_modes.supported); + __set_bit(nfp_eth_media_table[i].speed, + port->speed_bitmap); + } if (advertised_modes[1] & BIT_ULL(i - 64)) - __set_bit(nfp_eth_media_table[i], + __set_bit(nfp_eth_media_table[i].ethtool_link_mode, cmd->link_modes.advertising); } } @@ -468,6 +574,22 @@ nfp_net_set_link_ksettings(struct net_device *netdev, if (cmd->base.speed != SPEED_UNKNOWN) { u32 speed = cmd->base.speed / eth_port->lanes; + bool is_supported = false; + + for (u32 i = 0; i < 
NFP_SUP_SPEED_NUMBER; i++) { + if (cmd->base.speed == nfp_eth_speed_map[i] && + test_bit(i, port->speed_bitmap)) { + is_supported = true; + break; + } + } + + if (!is_supported) { + netdev_err(netdev, "Speed %u is not supported.\n", + cmd->base.speed); + err = -EINVAL; + goto err_bad_set; + } if (req_aneg) { netdev_err(netdev, "Speed changing is not allowed when working on autoneg mode.\n"); diff --git a/drivers/net/ethernet/netronome/nfp/nfp_port.h b/drivers/net/ethernet/netronome/nfp/nfp_port.h index f8cd157ca1d7..9c04f9f0e2c9 100644 --- a/drivers/net/ethernet/netronome/nfp/nfp_port.h +++ b/drivers/net/ethernet/netronome/nfp/nfp_port.h @@ -38,6 +38,16 @@ enum nfp_port_flags { NFP_PORT_CHANGED = 0, }; +enum { + NFP_SPEED_1G, + NFP_SPEED_10G, + NFP_SPEED_25G, + NFP_SPEED_40G, + NFP_SPEED_50G, + NFP_SPEED_100G, + NFP_SUP_SPEED_NUMBER +}; + /** * struct nfp_port - structure representing NFP port * @netdev: backpointer to associated netdev @@ -52,6 +62,7 @@ enum nfp_port_flags { * @eth_forced: for %NFP_PORT_PHYS_PORT port is forced UP or DOWN, don't change * @eth_port: for %NFP_PORT_PHYS_PORT translated ETH Table port entry * @eth_stats: for %NFP_PORT_PHYS_PORT MAC stats if available + * @speed_bitmap: for %NFP_PORT_PHYS_PORT supported speed bitmap * @pf_id: for %NFP_PORT_PF_PORT, %NFP_PORT_VF_PORT ID of the PCI PF (0-3) * @vf_id: for %NFP_PORT_VF_PORT ID of the PCI VF within @pf_id * @pf_split: for %NFP_PORT_PF_PORT %true if PCI PF has more than one vNIC @@ -78,6 +89,7 @@ struct nfp_port { bool eth_forced; struct nfp_eth_table_port *eth_port; u8 __iomem *eth_stats; + DECLARE_BITMAP(speed_bitmap, NFP_SUP_SPEED_NUMBER); }; /* NFP_PORT_PF_PORT, NFP_PORT_VF_PORT */ struct { diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.c b/drivers/net/ethernet/pensando/ionic/ionic_dev.c index 626b9113e7c4..d911f4fd9af6 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_dev.c +++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.c @@ -708,9 +708,16 @@ void ionic_q_post(struct ionic_queue *q, bool ring_doorbell, ionic_desc_cb cb, q->lif->index, q->name, q->hw_type, q->hw_index, q->head_idx, ring_doorbell); - if (ring_doorbell) + if (ring_doorbell) { ionic_dbell_ring(lif->kern_dbpage, q->hw_type, q->dbval | q->head_idx); + + q->dbell_jiffies = jiffies; + + if (q_to_qcq(q)->napi_qcq) + mod_timer(&q_to_qcq(q)->napi_qcq->napi_deadline, + jiffies + IONIC_NAPI_DEADLINE); + } } static bool ionic_q_is_posted(struct ionic_queue *q, unsigned int pos) diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.h b/drivers/net/ethernet/pensando/ionic/ionic_dev.h index 2a1d7b9c07e7..bce3ca38669b 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_dev.h +++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.h @@ -25,6 +25,12 @@ #define IONIC_DEV_INFO_REG_COUNT 32 #define IONIC_DEV_CMD_REG_COUNT 32 +#define IONIC_NAPI_DEADLINE (HZ / 200) /* 5ms */ +#define IONIC_ADMIN_DOORBELL_DEADLINE (HZ / 2) /* 500ms */ +#define IONIC_TX_DOORBELL_DEADLINE (HZ / 100) /* 10ms */ +#define IONIC_RX_MIN_DOORBELL_DEADLINE (HZ / 100) /* 10ms */ +#define IONIC_RX_MAX_DOORBELL_DEADLINE (HZ * 5) /* 5s */ + struct ionic_dev_bar { void __iomem *vaddr; phys_addr_t bus_addr; @@ -216,6 +222,8 @@ struct ionic_queue { struct ionic_lif *lif; struct ionic_desc_info *info; u64 dbval; + unsigned long dbell_deadline; + unsigned long dbell_jiffies; u16 head_idx; u16 tail_idx; unsigned int index; @@ -361,4 +369,8 @@ void ionic_q_service(struct ionic_queue *q, struct ionic_cq_info *cq_info, int ionic_heartbeat_check(struct ionic *ionic); bool 
ionic_is_fw_running(struct ionic_dev *idev); +bool ionic_adminq_poke_doorbell(struct ionic_queue *q); +bool ionic_txq_poke_doorbell(struct ionic_queue *q); +bool ionic_rxq_poke_doorbell(struct ionic_queue *q); + #endif /* _IONIC_DEV_H_ */ diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c index 4dd16c487f2b..63a78a9ac241 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c +++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c @@ -16,6 +16,7 @@ #include "ionic.h" #include "ionic_bus.h" +#include "ionic_dev.h" #include "ionic_lif.h" #include "ionic_txrx.h" #include "ionic_ethtool.h" @@ -200,6 +201,13 @@ void ionic_link_status_check_request(struct ionic_lif *lif, bool can_sleep) } } +static void ionic_napi_deadline(struct timer_list *timer) +{ + struct ionic_qcq *qcq = container_of(timer, struct ionic_qcq, napi_deadline); + + napi_schedule(&qcq->napi); +} + static irqreturn_t ionic_isr(int irq, void *data) { struct napi_struct *napi = data; @@ -269,6 +277,7 @@ static int ionic_qcq_enable(struct ionic_qcq *qcq) .oper = IONIC_Q_ENABLE, }, }; + int ret; idev = &lif->ionic->idev; dev = lif->ionic->dev; @@ -276,16 +285,24 @@ static int ionic_qcq_enable(struct ionic_qcq *qcq) dev_dbg(dev, "q_enable.index %d q_enable.qtype %d\n", ctx.cmd.q_control.index, ctx.cmd.q_control.type); + if (qcq->flags & IONIC_QCQ_F_INTR) + ionic_intr_clean(idev->intr_ctrl, qcq->intr.index); + + ret = ionic_adminq_post_wait(lif, &ctx); + if (ret) + return ret; + + if (qcq->napi.poll) + napi_enable(&qcq->napi); + if (qcq->flags & IONIC_QCQ_F_INTR) { irq_set_affinity_hint(qcq->intr.vector, &qcq->intr.affinity_mask); - napi_enable(&qcq->napi); - ionic_intr_clean(idev->intr_ctrl, qcq->intr.index); ionic_intr_mask(idev->intr_ctrl, qcq->intr.index, IONIC_INTR_MASK_CLEAR); } - return ionic_adminq_post_wait(lif, &ctx); + return 0; } static int ionic_qcq_disable(struct ionic_lif *lif, struct ionic_qcq *qcq, int fw_err) @@ -316,6 +333,7 @@ static int ionic_qcq_disable(struct ionic_lif *lif, struct ionic_qcq *qcq, int f synchronize_irq(qcq->intr.vector); irq_set_affinity_hint(qcq->intr.vector, NULL); napi_disable(&qcq->napi); + del_timer_sync(&qcq->napi_deadline); } /* If there was a previous fw communication error, don't bother with @@ -451,6 +469,7 @@ static void ionic_link_qcq_interrupts(struct ionic_qcq *src_qcq, n_qcq->intr.vector = src_qcq->intr.vector; n_qcq->intr.index = src_qcq->intr.index; + n_qcq->napi_qcq = src_qcq->napi_qcq; } static int ionic_alloc_qcq_interrupt(struct ionic_lif *lif, struct ionic_qcq *qcq) @@ -564,13 +583,15 @@ static int ionic_qcq_alloc(struct ionic_lif *lif, unsigned int type, } if (flags & IONIC_QCQ_F_NOTIFYQ) { - int q_size, cq_size; + int q_size; - /* q & cq need to be contiguous in case of notifyq */ + /* q & cq need to be contiguous in NotifyQ, so alloc it all in q + * and don't alloc qc. We leave new->qc_size and new->qc_base + * as 0 to be sure we don't try to free it later. 
+ */ q_size = ALIGN(num_descs * desc_size, PAGE_SIZE); - cq_size = ALIGN(num_descs * cq_desc_size, PAGE_SIZE); - - new->q_size = PAGE_SIZE + q_size + cq_size; + new->q_size = PAGE_SIZE + q_size + + ALIGN(num_descs * cq_desc_size, PAGE_SIZE); new->q_base = dma_alloc_coherent(dev, new->q_size, &new->q_base_pa, GFP_KERNEL); if (!new->q_base) { @@ -773,8 +794,14 @@ static int ionic_lif_txq_init(struct ionic_lif *lif, struct ionic_qcq *qcq) dev_dbg(dev, "txq->hw_type %d\n", q->hw_type); dev_dbg(dev, "txq->hw_index %d\n", q->hw_index); - if (test_bit(IONIC_LIF_F_SPLIT_INTR, lif->state)) + q->dbell_deadline = IONIC_TX_DOORBELL_DEADLINE; + q->dbell_jiffies = jiffies; + + if (test_bit(IONIC_LIF_F_SPLIT_INTR, lif->state)) { netif_napi_add(lif->netdev, &qcq->napi, ionic_tx_napi); + qcq->napi_qcq = qcq; + timer_setup(&qcq->napi_deadline, ionic_napi_deadline, 0); + } qcq->flags |= IONIC_QCQ_F_INITED; @@ -828,11 +855,17 @@ static int ionic_lif_rxq_init(struct ionic_lif *lif, struct ionic_qcq *qcq) dev_dbg(dev, "rxq->hw_type %d\n", q->hw_type); dev_dbg(dev, "rxq->hw_index %d\n", q->hw_index); + q->dbell_deadline = IONIC_RX_MIN_DOORBELL_DEADLINE; + q->dbell_jiffies = jiffies; + if (test_bit(IONIC_LIF_F_SPLIT_INTR, lif->state)) netif_napi_add(lif->netdev, &qcq->napi, ionic_rx_napi); else netif_napi_add(lif->netdev, &qcq->napi, ionic_txrx_napi); + qcq->napi_qcq = qcq; + timer_setup(&qcq->napi_deadline, ionic_napi_deadline, 0); + qcq->flags |= IONIC_QCQ_F_INITED; return 0; @@ -1150,6 +1183,7 @@ static int ionic_adminq_napi(struct napi_struct *napi, int budget) struct ionic_dev *idev = &lif->ionic->idev; unsigned long irqflags; unsigned int flags = 0; + bool resched = false; int rx_work = 0; int tx_work = 0; int n_work = 0; @@ -1187,6 +1221,16 @@ static int ionic_adminq_napi(struct napi_struct *napi, int budget) ionic_intr_credits(idev->intr_ctrl, intr->index, credits, flags); } + if (!a_work && ionic_adminq_poke_doorbell(&lif->adminqcq->q)) + resched = true; + if (lif->hwstamp_rxq && !rx_work && ionic_rxq_poke_doorbell(&lif->hwstamp_rxq->q)) + resched = true; + if (lif->hwstamp_txq && !tx_work && ionic_txq_poke_doorbell(&lif->hwstamp_txq->q)) + resched = true; + if (resched) + mod_timer(&lif->adminqcq->napi_deadline, + jiffies + IONIC_NAPI_DEADLINE); + return work_done; } @@ -3245,8 +3289,14 @@ static int ionic_lif_adminq_init(struct ionic_lif *lif) dev_dbg(dev, "adminq->hw_type %d\n", q->hw_type); dev_dbg(dev, "adminq->hw_index %d\n", q->hw_index); + q->dbell_deadline = IONIC_ADMIN_DOORBELL_DEADLINE; + q->dbell_jiffies = jiffies; + netif_napi_add(lif->netdev, &qcq->napi, ionic_adminq_napi); + qcq->napi_qcq = qcq; + timer_setup(&qcq->napi_deadline, ionic_napi_deadline, 0); + napi_enable(&qcq->napi); if (qcq->flags & IONIC_QCQ_F_INTR) diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.h b/drivers/net/ethernet/pensando/ionic/ionic_lif.h index a53984bf3544..734519895614 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_lif.h +++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.h @@ -74,8 +74,10 @@ struct ionic_qcq { struct ionic_queue q; struct ionic_cq cq; struct ionic_intr_info intr; + struct timer_list napi_deadline; struct napi_struct napi; unsigned int flags; + struct ionic_qcq *napi_qcq; struct dentry *dentry; }; diff --git a/drivers/net/ethernet/pensando/ionic/ionic_main.c b/drivers/net/ethernet/pensando/ionic/ionic_main.c index a13530ec4dd8..08c42b039d92 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_main.c +++ b/drivers/net/ethernet/pensando/ionic/ionic_main.c @@ -289,6 +289,35 @@ 
static void ionic_adminq_cb(struct ionic_queue *q, complete_all(&ctx->work); } +bool ionic_adminq_poke_doorbell(struct ionic_queue *q) +{ + struct ionic_lif *lif = q->lif; + unsigned long now, then, dif; + unsigned long irqflags; + + spin_lock_irqsave(&lif->adminq_lock, irqflags); + + if (q->tail_idx == q->head_idx) { + spin_unlock_irqrestore(&lif->adminq_lock, irqflags); + return false; + } + + now = READ_ONCE(jiffies); + then = q->dbell_jiffies; + dif = now - then; + + if (dif > q->dbell_deadline) { + ionic_dbell_ring(q->lif->kern_dbpage, q->hw_type, + q->dbval | q->head_idx); + + q->dbell_jiffies = now; + } + + spin_unlock_irqrestore(&lif->adminq_lock, irqflags); + + return true; +} + int ionic_adminq_post(struct ionic_lif *lif, struct ionic_admin_ctx *ctx) { struct ionic_desc_info *desc_info; diff --git a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c index 0c3977416cd1..f761780f0162 100644 --- a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c +++ b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c @@ -22,6 +22,67 @@ static inline void ionic_rxq_post(struct ionic_queue *q, bool ring_dbell, ionic_q_post(q, ring_dbell, cb_func, cb_arg); } +bool ionic_txq_poke_doorbell(struct ionic_queue *q) +{ + unsigned long now, then, dif; + struct netdev_queue *netdev_txq; + struct net_device *netdev; + + netdev = q->lif->netdev; + netdev_txq = netdev_get_tx_queue(netdev, q->index); + + HARD_TX_LOCK(netdev, netdev_txq, smp_processor_id()); + + if (q->tail_idx == q->head_idx) { + HARD_TX_UNLOCK(netdev, netdev_txq); + return false; + } + + now = READ_ONCE(jiffies); + then = q->dbell_jiffies; + dif = now - then; + + if (dif > q->dbell_deadline) { + ionic_dbell_ring(q->lif->kern_dbpage, q->hw_type, + q->dbval | q->head_idx); + + q->dbell_jiffies = now; + } + + HARD_TX_UNLOCK(netdev, netdev_txq); + + return true; +} + +bool ionic_rxq_poke_doorbell(struct ionic_queue *q) +{ + unsigned long now, then, dif; + + /* no lock, called from rx napi or txrx napi, nothing else can fill */ + + if (q->tail_idx == q->head_idx) + return false; + + now = READ_ONCE(jiffies); + then = q->dbell_jiffies; + dif = now - then; + + if (dif > q->dbell_deadline) { + ionic_dbell_ring(q->lif->kern_dbpage, q->hw_type, + q->dbval | q->head_idx); + + q->dbell_jiffies = now; + + dif = 2 * q->dbell_deadline; + if (dif > IONIC_RX_MAX_DOORBELL_DEADLINE) + dif = IONIC_RX_MAX_DOORBELL_DEADLINE; + + q->dbell_deadline = dif; + } + + return true; +} + static inline struct netdev_queue *q_to_ndq(struct ionic_queue *q) { return netdev_get_tx_queue(q->lif->netdev, q->index); @@ -424,6 +485,12 @@ void ionic_rx_fill(struct ionic_queue *q) ionic_dbell_ring(q->lif->kern_dbpage, q->hw_type, q->dbval | q->head_idx); + + q->dbell_deadline = IONIC_RX_MIN_DOORBELL_DEADLINE; + q->dbell_jiffies = jiffies; + + mod_timer(&q_to_qcq(q)->napi_qcq->napi_deadline, + jiffies + IONIC_NAPI_DEADLINE); } void ionic_rx_empty(struct ionic_queue *q) @@ -511,6 +578,9 @@ int ionic_tx_napi(struct napi_struct *napi, int budget) work_done, flags); } + if (!work_done && ionic_txq_poke_doorbell(&qcq->q)) + mod_timer(&qcq->napi_deadline, jiffies + IONIC_NAPI_DEADLINE); + return work_done; } @@ -544,23 +614,29 @@ int ionic_rx_napi(struct napi_struct *napi, int budget) work_done, flags); } + if (!work_done && ionic_rxq_poke_doorbell(&qcq->q)) + mod_timer(&qcq->napi_deadline, jiffies + IONIC_NAPI_DEADLINE); + return work_done; } int ionic_txrx_napi(struct napi_struct *napi, int budget) { - struct ionic_qcq *qcq = napi_to_qcq(napi); 
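[ Note: the three poke_doorbell helpers above all rely on the same wrap-safe time test: subtracting unsigned tick counters gives the elapsed ticks even across a jiffies wrap, so no special wrap handling is needed as long as the deadline stays well below half the counter range. A minimal userspace sketch of the idiom, with a fake counter standing in for READ_ONCE(jiffies)::

  #include <stdbool.h>
  #include <stdio.h>

  static unsigned long fake_jiffies;	/* monotonically increasing, may wrap */

  static bool deadline_passed(unsigned long then, unsigned long deadline)
  {
  	/* Unsigned subtraction is modulo ULONG_MAX + 1, so this stays
  	 * correct even after fake_jiffies wraps past zero.
  	 */
  	return fake_jiffies - then > deadline;
  }

  int main(void)
  {
  	unsigned long then = -3UL;	/* sampled 3 ticks before the wrap */

  	fake_jiffies = 5;		/* 8 ticks later; the counter wrapped */
  	printf("%d\n", deadline_passed(then, 10));	/* 0: not yet */
  	printf("%d\n", deadline_passed(then, 7));	/* 1: deadline passed */
  	return 0;
  }
]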
+ struct ionic_qcq *rxqcq = napi_to_qcq(napi); struct ionic_cq *rxcq = napi_to_cq(napi); unsigned int qi = rxcq->bound_q->index; + struct ionic_qcq *txqcq; struct ionic_dev *idev; struct ionic_lif *lif; struct ionic_cq *txcq; + bool resched = false; u32 rx_work_done = 0; u32 tx_work_done = 0; u32 flags = 0; lif = rxcq->bound_q->lif; idev = &lif->ionic->idev; + txqcq = lif->txqcqs[qi]; txcq = &lif->txqcqs[qi]->cq; tx_work_done = ionic_cq_service(txcq, IONIC_TX_BUDGET_DEFAULT, @@ -572,7 +648,7 @@ int ionic_txrx_napi(struct napi_struct *napi, int budget) ionic_rx_fill(rxcq->bound_q); if (rx_work_done < budget && napi_complete_done(napi, rx_work_done)) { - ionic_dim_update(qcq, 0); + ionic_dim_update(rxqcq, 0); flags |= IONIC_INTR_CRED_UNMASK; rxcq->bound_intr->rearm_count++; } @@ -583,6 +659,13 @@ int ionic_txrx_napi(struct napi_struct *napi, int budget) tx_work_done + rx_work_done, flags); } + if (!rx_work_done && ionic_rxq_poke_doorbell(&rxqcq->q)) + resched = true; + if (!tx_work_done && ionic_txq_poke_doorbell(&txqcq->q)) + resched = true; + if (resched) + mod_timer(&rxqcq->napi_deadline, jiffies + IONIC_NAPI_DEADLINE); + return rx_work_done; } diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c index 953f304b8588..b6d999927e86 100644 --- a/drivers/net/ethernet/qlogic/qede/qede_main.c +++ b/drivers/net/ethernet/qlogic/qede/qede_main.c @@ -892,6 +892,9 @@ static void qede_init_ndev(struct qede_dev *edev) ndev->hw_features = hw_features; + ndev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT; + /* MTU range: 46 - 9600 */ ndev->min_mtu = ETH_ZLEN - ETH_HLEN; ndev->max_mtu = QEDE_MAX_JUMBO_PACKET_SIZE; diff --git a/drivers/net/ethernet/renesas/rswitch.c b/drivers/net/ethernet/renesas/rswitch.c index 50b61a0a7f53..853394e5bb8b 100644 --- a/drivers/net/ethernet/renesas/rswitch.c +++ b/drivers/net/ethernet/renesas/rswitch.c @@ -123,13 +123,6 @@ static void rswitch_fwd_init(struct rswitch_private *priv) iowrite32(GENMASK(RSWITCH_NUM_PORTS - 1, 0), priv->addr + FWPBFC(priv->gwca.index)); } -/* gPTP timer (gPTP) */ -static void rswitch_get_timestamp(struct rswitch_private *priv, - struct timespec64 *ts) -{ - priv->ptp_priv->info.gettime64(&priv->ptp_priv->info, ts); -} - /* Gateway CPU agent block (GWCA) */ static int rswitch_gwca_change_mode(struct rswitch_private *priv, enum rswitch_gwca_mode mode) @@ -240,7 +233,7 @@ static int rswitch_get_num_cur_queues(struct rswitch_gwca_queue *gq) static bool rswitch_is_queue_rxed(struct rswitch_gwca_queue *gq) { - struct rswitch_ext_ts_desc *desc = &gq->ts_ring[gq->dirty]; + struct rswitch_ext_ts_desc *desc = &gq->rx_ring[gq->dirty]; if ((desc->desc.die_dt & DT_MASK) != DT_FEMPTY) return true; @@ -280,36 +273,43 @@ static void rswitch_gwca_queue_free(struct net_device *ndev, { int i; - if (gq->gptp) { + if (!gq->dir_tx) { dma_free_coherent(ndev->dev.parent, sizeof(struct rswitch_ext_ts_desc) * - (gq->ring_size + 1), gq->ts_ring, gq->ring_dma); - gq->ts_ring = NULL; - } else { - dma_free_coherent(ndev->dev.parent, - sizeof(struct rswitch_ext_desc) * - (gq->ring_size + 1), gq->ring, gq->ring_dma); - gq->ring = NULL; - } + (gq->ring_size + 1), gq->rx_ring, gq->ring_dma); + gq->rx_ring = NULL; - if (!gq->dir_tx) { for (i = 0; i < gq->ring_size; i++) dev_kfree_skb(gq->skbs[i]); + } else { + dma_free_coherent(ndev->dev.parent, + sizeof(struct rswitch_ext_desc) * + (gq->ring_size + 1), gq->tx_ring, gq->ring_dma); + gq->tx_ring = NULL; } kfree(gq->skbs); gq->skbs = NULL; 
} +static void rswitch_gwca_ts_queue_free(struct rswitch_private *priv) +{ + struct rswitch_gwca_queue *gq = &priv->gwca.ts_queue; + + dma_free_coherent(&priv->pdev->dev, + sizeof(struct rswitch_ts_desc) * (gq->ring_size + 1), + gq->ts_ring, gq->ring_dma); + gq->ts_ring = NULL; +} + static int rswitch_gwca_queue_alloc(struct net_device *ndev, struct rswitch_private *priv, struct rswitch_gwca_queue *gq, - bool dir_tx, bool gptp, int ring_size) + bool dir_tx, int ring_size) { int i, bit; gq->dir_tx = dir_tx; - gq->gptp = gptp; gq->ring_size = ring_size; gq->ndev = ndev; @@ -317,18 +317,19 @@ static int rswitch_gwca_queue_alloc(struct net_device *ndev, if (!gq->skbs) return -ENOMEM; - if (!dir_tx) + if (!dir_tx) { rswitch_gwca_queue_alloc_skb(gq, 0, gq->ring_size); - if (gptp) - gq->ts_ring = dma_alloc_coherent(ndev->dev.parent, + gq->rx_ring = dma_alloc_coherent(ndev->dev.parent, sizeof(struct rswitch_ext_ts_desc) * (gq->ring_size + 1), &gq->ring_dma, GFP_KERNEL); - else - gq->ring = dma_alloc_coherent(ndev->dev.parent, - sizeof(struct rswitch_ext_desc) * - (gq->ring_size + 1), &gq->ring_dma, GFP_KERNEL); - if (!gq->ts_ring && !gq->ring) + } else { + gq->tx_ring = dma_alloc_coherent(ndev->dev.parent, + sizeof(struct rswitch_ext_desc) * + (gq->ring_size + 1), &gq->ring_dma, GFP_KERNEL); + } + + if (!gq->rx_ring && !gq->tx_ring) goto out; i = gq->index / 32; @@ -346,6 +347,17 @@ out: return -ENOMEM; } +static int rswitch_gwca_ts_queue_alloc(struct rswitch_private *priv) +{ + struct rswitch_gwca_queue *gq = &priv->gwca.ts_queue; + + gq->ring_size = TS_RING_SIZE; + gq->ts_ring = dma_alloc_coherent(&priv->pdev->dev, + sizeof(struct rswitch_ts_desc) * + (gq->ring_size + 1), &gq->ring_dma, GFP_KERNEL); + return !gq->ts_ring ? -ENOMEM : 0; +} + static void rswitch_desc_set_dptr(struct rswitch_desc *desc, dma_addr_t addr) { desc->dptrl = cpu_to_le32(lower_32_bits(addr)); @@ -361,14 +373,14 @@ static int rswitch_gwca_queue_format(struct net_device *ndev, struct rswitch_private *priv, struct rswitch_gwca_queue *gq) { - int tx_ring_size = sizeof(struct rswitch_ext_desc) * gq->ring_size; + int ring_size = sizeof(struct rswitch_ext_desc) * gq->ring_size; struct rswitch_ext_desc *desc; struct rswitch_desc *linkfix; dma_addr_t dma_addr; int i; - memset(gq->ring, 0, tx_ring_size); - for (i = 0, desc = gq->ring; i < gq->ring_size; i++, desc++) { + memset(gq->tx_ring, 0, ring_size); + for (i = 0, desc = gq->tx_ring; i < gq->ring_size; i++, desc++) { if (!gq->dir_tx) { dma_addr = dma_map_single(ndev->dev.parent, gq->skbs[i]->data, PKT_BUF_SZ, @@ -386,7 +398,7 @@ static int rswitch_gwca_queue_format(struct net_device *ndev, rswitch_desc_set_dptr(&desc->desc, gq->ring_dma); desc->desc.die_dt = DT_LINKFIX; - linkfix = &priv->linkfix_table[gq->index]; + linkfix = &priv->gwca.linkfix_table[gq->index]; linkfix->die_dt = DT_LINKFIX; rswitch_desc_set_dptr(linkfix, gq->ring_dma); @@ -397,7 +409,7 @@ static int rswitch_gwca_queue_format(struct net_device *ndev, err: if (!gq->dir_tx) { - for (i--, desc = gq->ring; i >= 0; i--, desc++) { + for (i--, desc = gq->tx_ring; i >= 0; i--, desc++) { dma_addr = rswitch_desc_get_dptr(&desc->desc); dma_unmap_single(ndev->dev.parent, dma_addr, PKT_BUF_SZ, DMA_FROM_DEVICE); @@ -407,9 +419,23 @@ err: return -ENOMEM; } -static int rswitch_gwca_queue_ts_fill(struct net_device *ndev, - struct rswitch_gwca_queue *gq, - int start_index, int num) +static void rswitch_gwca_ts_queue_fill(struct rswitch_private *priv, + int start_index, int num) +{ + struct rswitch_gwca_queue *gq = 
&priv->gwca.ts_queue; + struct rswitch_ts_desc *desc; + int i, index; + + for (i = 0; i < num; i++) { + index = (i + start_index) % gq->ring_size; + desc = &gq->ts_ring[index]; + desc->desc.die_dt = DT_FEMPTY_ND | DIE; + } +} + +static int rswitch_gwca_queue_ext_ts_fill(struct net_device *ndev, + struct rswitch_gwca_queue *gq, + int start_index, int num) { struct rswitch_device *rdev = netdev_priv(ndev); struct rswitch_ext_ts_desc *desc; @@ -418,7 +444,7 @@ static int rswitch_gwca_queue_ts_fill(struct net_device *ndev, for (i = 0; i < num; i++) { index = (i + start_index) % gq->ring_size; - desc = &gq->ts_ring[index]; + desc = &gq->rx_ring[index]; if (!gq->dir_tx) { dma_addr = dma_map_single(ndev->dev.parent, gq->skbs[index]->data, PKT_BUF_SZ, @@ -442,7 +468,7 @@ err: if (!gq->dir_tx) { for (i--; i >= 0; i--) { index = (i + start_index) % gq->ring_size; - desc = &gq->ts_ring[index]; + desc = &gq->rx_ring[index]; dma_addr = rswitch_desc_get_dptr(&desc->desc); dma_unmap_single(ndev->dev.parent, dma_addr, PKT_BUF_SZ, DMA_FROM_DEVICE); @@ -452,25 +478,25 @@ err: return -ENOMEM; } -static int rswitch_gwca_queue_ts_format(struct net_device *ndev, - struct rswitch_private *priv, - struct rswitch_gwca_queue *gq) +static int rswitch_gwca_queue_ext_ts_format(struct net_device *ndev, + struct rswitch_private *priv, + struct rswitch_gwca_queue *gq) { - int tx_ts_ring_size = sizeof(struct rswitch_ext_ts_desc) * gq->ring_size; + int ring_size = sizeof(struct rswitch_ext_ts_desc) * gq->ring_size; struct rswitch_ext_ts_desc *desc; struct rswitch_desc *linkfix; int err; - memset(gq->ts_ring, 0, tx_ts_ring_size); - err = rswitch_gwca_queue_ts_fill(ndev, gq, 0, gq->ring_size); + memset(gq->rx_ring, 0, ring_size); + err = rswitch_gwca_queue_ext_ts_fill(ndev, gq, 0, gq->ring_size); if (err < 0) return err; - desc = &gq->ts_ring[gq->ring_size]; /* Last */ + desc = &gq->rx_ring[gq->ring_size]; /* Last */ rswitch_desc_set_dptr(&desc->desc, gq->ring_dma); desc->desc.die_dt = DT_LINKFIX; - linkfix = &priv->linkfix_table[gq->index]; + linkfix = &priv->gwca.linkfix_table[gq->index]; linkfix->die_dt = DT_LINKFIX; rswitch_desc_set_dptr(linkfix, gq->ring_dma); @@ -480,28 +506,31 @@ static int rswitch_gwca_queue_ts_format(struct net_device *ndev, return 0; } -static int rswitch_gwca_desc_alloc(struct rswitch_private *priv) +static int rswitch_gwca_linkfix_alloc(struct rswitch_private *priv) { int i, num_queues = priv->gwca.num_queues; + struct rswitch_gwca *gwca = &priv->gwca; struct device *dev = &priv->pdev->dev; - priv->linkfix_table_size = sizeof(struct rswitch_desc) * num_queues; - priv->linkfix_table = dma_alloc_coherent(dev, priv->linkfix_table_size, - &priv->linkfix_table_dma, GFP_KERNEL); - if (!priv->linkfix_table) + gwca->linkfix_table_size = sizeof(struct rswitch_desc) * num_queues; + gwca->linkfix_table = dma_alloc_coherent(dev, gwca->linkfix_table_size, + &gwca->linkfix_table_dma, GFP_KERNEL); + if (!gwca->linkfix_table) return -ENOMEM; for (i = 0; i < num_queues; i++) - priv->linkfix_table[i].die_dt = DT_EOS; + gwca->linkfix_table[i].die_dt = DT_EOS; return 0; } -static void rswitch_gwca_desc_free(struct rswitch_private *priv) +static void rswitch_gwca_linkfix_free(struct rswitch_private *priv) { - if (priv->linkfix_table) - dma_free_coherent(&priv->pdev->dev, priv->linkfix_table_size, - priv->linkfix_table, priv->linkfix_table_dma); - priv->linkfix_table = NULL; + struct rswitch_gwca *gwca = &priv->gwca; + + if (gwca->linkfix_table) + dma_free_coherent(&priv->pdev->dev, gwca->linkfix_table_size, + 
gwca->linkfix_table, gwca->linkfix_table_dma); + gwca->linkfix_table = NULL; } static struct rswitch_gwca_queue *rswitch_gwca_get(struct rswitch_private *priv) @@ -536,8 +565,7 @@ static int rswitch_txdmac_alloc(struct net_device *ndev) if (!rdev->tx_queue) return -EBUSY; - err = rswitch_gwca_queue_alloc(ndev, priv, rdev->tx_queue, true, false, - TX_RING_SIZE); + err = rswitch_gwca_queue_alloc(ndev, priv, rdev->tx_queue, true, TX_RING_SIZE); if (err < 0) { rswitch_gwca_put(priv, rdev->tx_queue); return err; @@ -571,8 +599,7 @@ static int rswitch_rxdmac_alloc(struct net_device *ndev) if (!rdev->rx_queue) return -EBUSY; - err = rswitch_gwca_queue_alloc(ndev, priv, rdev->rx_queue, false, true, - RX_RING_SIZE); + err = rswitch_gwca_queue_alloc(ndev, priv, rdev->rx_queue, false, RX_RING_SIZE); if (err < 0) { rswitch_gwca_put(priv, rdev->rx_queue); return err; @@ -594,7 +621,7 @@ static int rswitch_rxdmac_init(struct rswitch_private *priv, int index) struct rswitch_device *rdev = priv->rdev[index]; struct net_device *ndev = rdev->ndev; - return rswitch_gwca_queue_ts_format(ndev, priv, rdev->rx_queue); + return rswitch_gwca_queue_ext_ts_format(ndev, priv, rdev->rx_queue); } static int rswitch_gwca_hw_init(struct rswitch_private *priv) @@ -617,8 +644,11 @@ static int rswitch_gwca_hw_init(struct rswitch_private *priv) iowrite32(GWVCC_VEM_SC_TAG, priv->addr + GWVCC); iowrite32(0, priv->addr + GWTTFC); - iowrite32(lower_32_bits(priv->linkfix_table_dma), priv->addr + GWDCBAC1); - iowrite32(upper_32_bits(priv->linkfix_table_dma), priv->addr + GWDCBAC0); + iowrite32(lower_32_bits(priv->gwca.linkfix_table_dma), priv->addr + GWDCBAC1); + iowrite32(upper_32_bits(priv->gwca.linkfix_table_dma), priv->addr + GWDCBAC0); + iowrite32(lower_32_bits(priv->gwca.ts_queue.ring_dma), priv->addr + GWTDCAC10); + iowrite32(upper_32_bits(priv->gwca.ts_queue.ring_dma), priv->addr + GWTDCAC00); + iowrite32(GWCA_TS_IRQ_BIT, priv->addr + GWTSDCC0); rswitch_gwca_set_rate_limit(priv, priv->gwca.speed); for (i = 0; i < RSWITCH_NUM_PORTS; i++) { @@ -675,7 +705,7 @@ static bool rswitch_rx(struct net_device *ndev, int *quota) boguscnt = min_t(int, gq->ring_size, *quota); limit = boguscnt; - desc = &gq->ts_ring[gq->cur]; + desc = &gq->rx_ring[gq->cur]; while ((desc->desc.die_dt & DT_MASK) != DT_FEMPTY) { if (--boguscnt < 0) break; @@ -703,14 +733,14 @@ static bool rswitch_rx(struct net_device *ndev, int *quota) rdev->ndev->stats.rx_bytes += pkt_len; gq->cur = rswitch_next_queue_index(gq, true, 1); - desc = &gq->ts_ring[gq->cur]; + desc = &gq->rx_ring[gq->cur]; } num = rswitch_get_num_cur_queues(gq); ret = rswitch_gwca_queue_alloc_skb(gq, gq->dirty, num); if (ret < 0) goto err; - ret = rswitch_gwca_queue_ts_fill(ndev, gq, gq->dirty, num); + ret = rswitch_gwca_queue_ext_ts_fill(ndev, gq, gq->dirty, num); if (ret < 0) goto err; gq->dirty = rswitch_next_queue_index(gq, false, num); @@ -737,7 +767,7 @@ static int rswitch_tx_free(struct net_device *ndev, bool free_txed_only) for (; rswitch_get_num_cur_queues(gq) > 0; gq->dirty = rswitch_next_queue_index(gq, false, 1)) { - desc = &gq->ring[gq->dirty]; + desc = &gq->tx_ring[gq->dirty]; if (free_txed_only && (desc->desc.die_dt & DT_MASK) != DT_FEMPTY) break; @@ -745,15 +775,6 @@ static int rswitch_tx_free(struct net_device *ndev, bool free_txed_only) size = le16_to_cpu(desc->desc.info_ds) & TX_DS; skb = gq->skbs[gq->dirty]; if (skb) { - if (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) { - struct skb_shared_hwtstamps shhwtstamps; - struct timespec64 ts; - - rswitch_get_timestamp(rdev->priv, 
&ts); - memset(&shhwtstamps, 0, sizeof(shhwtstamps)); - shhwtstamps.hwtstamp = timespec64_to_ktime(ts); - skb_tstamp_tx(skb, &shhwtstamps); - } dma_addr = rswitch_desc_get_dptr(&desc->desc); dma_unmap_single(ndev->dev.parent, dma_addr, size, DMA_TO_DEVICE); @@ -879,6 +900,73 @@ static int rswitch_gwca_request_irqs(struct rswitch_private *priv) return 0; } +static void rswitch_ts(struct rswitch_private *priv) +{ + struct rswitch_gwca_queue *gq = &priv->gwca.ts_queue; + struct rswitch_gwca_ts_info *ts_info, *ts_info2; + struct skb_shared_hwtstamps shhwtstamps; + struct rswitch_ts_desc *desc; + struct timespec64 ts; + u32 tag, port; + int num; + + desc = &gq->ts_ring[gq->cur]; + while ((desc->desc.die_dt & DT_MASK) != DT_FEMPTY_ND) { + dma_rmb(); + + port = TS_DESC_DPN(__le32_to_cpu(desc->desc.dptrl)); + tag = TS_DESC_TSUN(__le32_to_cpu(desc->desc.dptrl)); + + list_for_each_entry_safe(ts_info, ts_info2, &priv->gwca.ts_info_list, list) { + if (!(ts_info->port == port && ts_info->tag == tag)) + continue; + + memset(&shhwtstamps, 0, sizeof(shhwtstamps)); + ts.tv_sec = __le32_to_cpu(desc->ts_sec); + ts.tv_nsec = __le32_to_cpu(desc->ts_nsec & cpu_to_le32(0x3fffffff)); + shhwtstamps.hwtstamp = timespec64_to_ktime(ts); + skb_tstamp_tx(ts_info->skb, &shhwtstamps); + dev_consume_skb_irq(ts_info->skb); + list_del(&ts_info->list); + kfree(ts_info); + break; + } + + gq->cur = rswitch_next_queue_index(gq, true, 1); + desc = &gq->ts_ring[gq->cur]; + } + + num = rswitch_get_num_cur_queues(gq); + rswitch_gwca_ts_queue_fill(priv, gq->dirty, num); + gq->dirty = rswitch_next_queue_index(gq, false, num); +} + +static irqreturn_t rswitch_gwca_ts_irq(int irq, void *dev_id) +{ + struct rswitch_private *priv = dev_id; + + if (ioread32(priv->addr + GWTSDIS) & GWCA_TS_IRQ_BIT) { + iowrite32(GWCA_TS_IRQ_BIT, priv->addr + GWTSDIS); + rswitch_ts(priv); + + return IRQ_HANDLED; + } + + return IRQ_NONE; +} + +static int rswitch_gwca_ts_request_irqs(struct rswitch_private *priv) +{ + int irq; + + irq = platform_get_irq_byname(priv->pdev, GWCA_TS_IRQ_RESOURCE_NAME); + if (irq < 0) + return irq; + + return devm_request_irq(&priv->pdev->dev, irq, rswitch_gwca_ts_irq, + 0, GWCA_TS_IRQ_NAME, priv); +} + /* Ethernet TSN Agent block (ETHA) and Ethernet MAC IP block (RMAC) */ static int rswitch_etha_change_mode(struct rswitch_etha *etha, enum rswitch_etha_mode mode) @@ -1349,15 +1437,28 @@ static int rswitch_open(struct net_device *ndev) rswitch_enadis_data_irq(rdev->priv, rdev->tx_queue->index, true); rswitch_enadis_data_irq(rdev->priv, rdev->rx_queue->index, true); + iowrite32(GWCA_TS_IRQ_BIT, rdev->priv->addr + GWTSDIE); + return 0; }; static int rswitch_stop(struct net_device *ndev) { struct rswitch_device *rdev = netdev_priv(ndev); + struct rswitch_gwca_ts_info *ts_info, *ts_info2; netif_tx_stop_all_queues(ndev); + iowrite32(GWCA_TS_IRQ_BIT, rdev->priv->addr + GWTSDID); + + list_for_each_entry_safe(ts_info, ts_info2, &rdev->priv->gwca.ts_info_list, list) { + if (ts_info->port != rdev->port) + continue; + dev_kfree_skb_irq(ts_info->skb); + list_del(&ts_info->list); + kfree(ts_info); + } + rswitch_enadis_data_irq(rdev->priv, rdev->tx_queue->index, false); rswitch_enadis_data_irq(rdev->priv, rdev->rx_queue->index, false); @@ -1390,17 +1491,31 @@ static netdev_tx_t rswitch_start_xmit(struct sk_buff *skb, struct net_device *nd } gq->skbs[gq->cur] = skb; - desc = &gq->ring[gq->cur]; + desc = &gq->tx_ring[gq->cur]; rswitch_desc_set_dptr(&desc->desc, dma_addr); desc->desc.info_ds = cpu_to_le16(skb->len); desc->info1 = 
cpu_to_le64(INFO1_DV(BIT(rdev->etha->index)) | INFO1_FMT); if (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) { + struct rswitch_gwca_ts_info *ts_info; + + ts_info = kzalloc(sizeof(*ts_info), GFP_ATOMIC); + if (!ts_info) { + dma_unmap_single(ndev->dev.parent, dma_addr, skb->len, DMA_TO_DEVICE); + return -ENOMEM; + } + skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS; rdev->ts_tag++; desc->info1 |= cpu_to_le64(INFO1_TSUN(rdev->ts_tag) | INFO1_TXC); + + ts_info->skb = skb_get(skb); + ts_info->port = rdev->port; + ts_info->tag = rdev->ts_tag; + list_add_tail(&ts_info->list, &rdev->priv->gwca.ts_info_list); + + skb_tx_timestamp(skb); } - skb_tx_timestamp(skb); dma_wmb(); @@ -1650,10 +1765,17 @@ static int rswitch_init(struct rswitch_private *priv) if (err < 0) return err; - err = rswitch_gwca_desc_alloc(priv); + err = rswitch_gwca_linkfix_alloc(priv); if (err < 0) return -ENOMEM; + err = rswitch_gwca_ts_queue_alloc(priv); + if (err < 0) + goto err_ts_queue_alloc; + + rswitch_gwca_ts_queue_fill(priv, 0, TS_RING_SIZE); + INIT_LIST_HEAD(&priv->gwca.ts_info_list); + for (i = 0; i < RSWITCH_NUM_PORTS; i++) { err = rswitch_device_alloc(priv, i); if (err < 0) { @@ -1674,6 +1796,10 @@ static int rswitch_init(struct rswitch_private *priv) if (err < 0) goto err_gwca_request_irq; + err = rswitch_gwca_ts_request_irqs(priv); + if (err < 0) + goto err_gwca_ts_request_irq; + err = rswitch_gwca_hw_init(priv); if (err < 0) goto err_gwca_hw_init; @@ -1704,6 +1830,7 @@ err_ether_port_init_all: rswitch_gwca_hw_deinit(priv); err_gwca_hw_init: +err_gwca_ts_request_irq: err_gwca_request_irq: rcar_gen4_ptp_unregister(priv->ptp_priv); @@ -1712,7 +1839,10 @@ err_ptp_register: rswitch_device_free(priv, i); err_device_alloc: - rswitch_gwca_desc_free(priv); + rswitch_gwca_ts_queue_free(priv); + +err_ts_queue_alloc: + rswitch_gwca_linkfix_free(priv); return err; } @@ -1791,7 +1921,8 @@ static void rswitch_deinit(struct rswitch_private *priv) rswitch_device_free(priv, i); } - rswitch_gwca_desc_free(priv); + rswitch_gwca_ts_queue_free(priv); + rswitch_gwca_linkfix_free(priv); rswitch_clock_disable(priv); } diff --git a/drivers/net/ethernet/renesas/rswitch.h b/drivers/net/ethernet/renesas/rswitch.h index 59830ab91a69..27d3d38c055f 100644 --- a/drivers/net/ethernet/renesas/rswitch.h +++ b/drivers/net/ethernet/renesas/rswitch.h @@ -27,6 +27,7 @@ #define TX_RING_SIZE 1024 #define RX_RING_SIZE 1024 +#define TS_RING_SIZE (TX_RING_SIZE * RSWITCH_NUM_PORTS) #define PKT_BUF_SZ 1584 #define RSWITCH_ALIGN 128 @@ -49,6 +50,10 @@ #define AGENT_INDEX_GWCA 3 #define GWRO RSWITCH_GWCA0_OFFSET +#define GWCA_TS_IRQ_RESOURCE_NAME "gwca0_rxts0" +#define GWCA_TS_IRQ_NAME "rswitch: gwca0_rxts0" +#define GWCA_TS_IRQ_BIT BIT(0) + #define FWRO 0 #define TPRO RSWITCH_TOP_OFFSET #define CARO RSWITCH_COMA_OFFSET @@ -831,7 +836,7 @@ enum DIE_DT { DT_FSINGLE = 0x80, DT_FSTART = 0x90, DT_FMID = 0xa0, - DT_FEND = 0xb8, + DT_FEND = 0xb0, /* Chain control */ DT_LEMPTY = 0xc0, @@ -843,7 +848,7 @@ enum DIE_DT { DT_FEMPTY = 0x40, DT_FEMPTY_IS = 0x10, DT_FEMPTY_IC = 0x20, - DT_FEMPTY_ND = 0x38, + DT_FEMPTY_ND = 0x30, DT_FEMPTY_START = 0x50, DT_FEMPTY_MID = 0x60, DT_FEMPTY_END = 0x70, @@ -865,6 +870,12 @@ enum DIE_DT { /* For reception */ #define INFO1_SPN(port) ((u64)(port) << 36ULL) +/* For timestamp descriptor in dptrl (Byte 4 to 7) */ +#define TS_DESC_TSUN(dptrl) ((dptrl) & GENMASK(7, 0)) +#define TS_DESC_SPN(dptrl) (((dptrl) & GENMASK(10, 8)) >> 8) +#define TS_DESC_DPN(dptrl) (((dptrl) & GENMASK(17, 16)) >> 16) +#define TS_DESC_TN(dptrl) ((dptrl) & BIT(24)) + 
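/*
 * A minimal illustration, not part of the patch: the TS_DESC_* masks
 * above decode the dptrl word of a completed timestamp descriptor.
 * rswitch_ts() applies TS_DESC_DPN()/TS_DESC_TSUN() this way to match
 * a descriptor against the port and tag that rswitch_start_xmit()
 * recorded via INFO1_TSUN.  The helper name is hypothetical.
 */
static inline void rswitch_ts_desc_decode(const struct rswitch_ts_desc *desc,
					  u32 *port, u32 *tag)
{
	u32 dptrl = le32_to_cpu(desc->desc.dptrl);

	*port = TS_DESC_DPN(dptrl);	/* port the frame was sent on */
	*tag = TS_DESC_TSUN(dptrl);	/* timestamp unique number (tag) */
}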
struct rswitch_desc { __le16 info_ds; /* Descriptor size */ u8 die_dt; /* Descriptor interrupt enable and type */ @@ -911,27 +922,43 @@ struct rswitch_etha { * name, this driver calls "queue". */ struct rswitch_gwca_queue { - int index; - bool dir_tx; - bool gptp; union { - struct rswitch_ext_desc *ring; - struct rswitch_ext_ts_desc *ts_ring; + struct rswitch_ext_desc *tx_ring; + struct rswitch_ext_ts_desc *rx_ring; + struct rswitch_ts_desc *ts_ring; }; + + /* Common */ dma_addr_t ring_dma; int ring_size; int cur; int dirty; - struct sk_buff **skbs; + /* For [rt]_ring */ + int index; + bool dir_tx; + struct sk_buff **skbs; struct net_device *ndev; /* queue to ndev for irq */ }; +struct rswitch_gwca_ts_info { + struct sk_buff *skb; + struct list_head list; + + int port; + u8 tag; +}; + #define RSWITCH_NUM_IRQ_REGS (RSWITCH_MAX_NUM_QUEUES / BITS_PER_TYPE(u32)) struct rswitch_gwca { int index; + struct rswitch_desc *linkfix_table; + dma_addr_t linkfix_table_dma; + u32 linkfix_table_size; struct rswitch_gwca_queue *queues; int num_queues; + struct rswitch_gwca_queue ts_queue; + struct list_head ts_info_list; DECLARE_BITMAP(used, RSWITCH_MAX_NUM_QUEUES); u32 tx_irq_bits[RSWITCH_NUM_IRQ_REGS]; u32 rx_irq_bits[RSWITCH_NUM_IRQ_REGS]; @@ -969,9 +996,6 @@ struct rswitch_private { struct platform_device *pdev; void __iomem *addr; struct rcar_gen4_ptp_private *ptp_priv; - struct rswitch_desc *linkfix_table; - dma_addr_t linkfix_table_dma; - u32 linkfix_table_size; struct rswitch_device *rdev[RSWITCH_NUM_PORTS]; diff --git a/drivers/net/ethernet/sfc/efx.c b/drivers/net/ethernet/sfc/efx.c index 3a86f1213a05..02c2adeb0a12 100644 --- a/drivers/net/ethernet/sfc/efx.c +++ b/drivers/net/ethernet/sfc/efx.c @@ -1028,6 +1028,10 @@ static int efx_pci_probe_post_io(struct efx_nic *efx) net_dev->features &= ~NETIF_F_HW_VLAN_CTAG_FILTER; net_dev->features |= efx->fixed_features; + net_dev->xdp_features = NETDEV_XDP_ACT_BASIC | + NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT; + rc = efx_register_netdev(efx); if (!rc) return 0; diff --git a/drivers/net/ethernet/sfc/siena/efx.c b/drivers/net/ethernet/sfc/siena/efx.c index 60e5b7c8ccf9..ef52ec71d197 100644 --- a/drivers/net/ethernet/sfc/siena/efx.c +++ b/drivers/net/ethernet/sfc/siena/efx.c @@ -1007,6 +1007,10 @@ static int efx_pci_probe_post_io(struct efx_nic *efx) net_dev->features &= ~NETIF_F_HW_VLAN_CTAG_FILTER; net_dev->features |= efx->fixed_features; + net_dev->xdp_features = NETDEV_XDP_ACT_BASIC | + NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT; + rc = efx_register_netdev(efx); if (!rc) return 0; diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c index 9b46579b5a10..2d7347b71c41 100644 --- a/drivers/net/ethernet/socionext/netsec.c +++ b/drivers/net/ethernet/socionext/netsec.c @@ -2104,6 +2104,9 @@ static int netsec_probe(struct platform_device *pdev) NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM; ndev->hw_features = ndev->features; + ndev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT; + priv->rx_cksum_offload_flag = true; ret = netsec_register_mdio(priv, phy_addr); diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c index f44e4e4b4f16..868f59ec8439 100644 --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c @@ -7153,6 +7153,8 @@ int stmmac_dvr_probe(struct device *device, ndev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | 
NETIF_F_RXCSUM; + ndev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT; ret = stmmac_tc_init(priv, priv); if (!ret) { diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c index 13c9c2d6b79b..37f0b62ec5d6 100644 --- a/drivers/net/ethernet/ti/cpsw.c +++ b/drivers/net/ethernet/ti/cpsw.c @@ -1458,6 +1458,8 @@ static int cpsw_probe_dual_emac(struct cpsw_priv *priv) priv_sl2->emac_port = 1; cpsw->slaves[1].ndev = ndev; ndev->features |= NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_HW_VLAN_CTAG_RX; + ndev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT; ndev->netdev_ops = &cpsw_netdev_ops; ndev->ethtool_ops = &cpsw_ethtool_ops; @@ -1635,6 +1637,8 @@ static int cpsw_probe(struct platform_device *pdev) cpsw->slaves[0].ndev = ndev; ndev->features |= NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_HW_VLAN_CTAG_RX; + ndev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT; ndev->netdev_ops = &cpsw_netdev_ops; ndev->ethtool_ops = &cpsw_ethtool_ops; diff --git a/drivers/net/ethernet/ti/cpsw_new.c b/drivers/net/ethernet/ti/cpsw_new.c index 83596ec0c7cb..35128dd45ffc 100644 --- a/drivers/net/ethernet/ti/cpsw_new.c +++ b/drivers/net/ethernet/ti/cpsw_new.c @@ -1405,6 +1405,10 @@ static int cpsw_create_ports(struct cpsw_common *cpsw) ndev->features |= NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_NETNS_LOCAL | NETIF_F_HW_TC; + ndev->xdp_features = NETDEV_XDP_ACT_BASIC | + NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT; + ndev->netdev_ops = &cpsw_netdev_ops; ndev->ethtool_ops = &cpsw_ethtool_ops; SET_NETDEV_DEV(ndev, dev); diff --git a/drivers/net/ethernet/wangxun/libwx/wx_lib.c b/drivers/net/ethernet/wangxun/libwx/wx_lib.c index 2ee286b2b177..eb89a274083e 100644 --- a/drivers/net/ethernet/wangxun/libwx/wx_lib.c +++ b/drivers/net/ethernet/wangxun/libwx/wx_lib.c @@ -1745,8 +1745,8 @@ static int wx_alloc_page_pool(struct wx_ring *rx_ring) rx_ring->page_pool = page_pool_create(&pp_params); if (IS_ERR(rx_ring->page_pool)) { - rx_ring->page_pool = NULL; ret = PTR_ERR(rx_ring->page_pool); + rx_ring->page_pool = NULL; } return ret; diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c index e02d1e3ef672..79f4e13620a4 100644 --- a/drivers/net/hyperv/netvsc.c +++ b/drivers/net/hyperv/netvsc.c @@ -1034,7 +1034,7 @@ static int netvsc_dma_map(struct hv_device *hv_dev, packet->dma_range = kcalloc(page_count, sizeof(*packet->dma_range), - GFP_KERNEL); + GFP_ATOMIC); if (!packet->dma_range) return -ENOMEM; diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c index f9b219e6cd58..a9b139bbdb2c 100644 --- a/drivers/net/hyperv/netvsc_drv.c +++ b/drivers/net/hyperv/netvsc_drv.c @@ -2559,6 +2559,8 @@ static int netvsc_probe(struct hv_device *dev, netdev_lockdep_set_classes(net); + net->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT; + /* MTU range: 68 - 1500 or 65521 */ net->min_mtu = NETVSC_MTU_MIN; if (nvdev->nvsp_version >= NVSP_PROTOCOL_VERSION_2) diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c index bea2da1c4c51..2cb1710f6ac3 100644 --- a/drivers/net/ipa/gsi.c +++ b/drivers/net/ipa/gsi.c @@ -182,6 +182,41 @@ static bool gsi_channel_initialized(struct gsi_channel *channel) return !!channel->gsi; } +/* Encode the channel protocol for the CH_C_CNTXT_0 register */ +static u32 ch_c_cntxt_0_type_encode(enum ipa_version version, + enum gsi_channel_type type) +{ + u32 val; + + val = u32_encode_bits(type, 
CHTYPE_PROTOCOL_FMASK); + if (version < IPA_VERSION_4_5) + return val; + + type >>= hweight32(CHTYPE_PROTOCOL_FMASK); + + return val | u32_encode_bits(type, CHTYPE_PROTOCOL_MSB_FMASK); +} + +/* Encode a channel ring buffer length for the CH_C_CNTXT_1 register */ +static u32 ch_c_cntxt_1_length_encode(enum ipa_version version, u32 length) +{ + if (version < IPA_VERSION_4_9) + return u32_encode_bits(length, GENMASK(15, 0)); + + return u32_encode_bits(length, GENMASK(19, 0)); +} + +/* Encode the length of the event channel ring buffer for the + * EV_CH_E_CNTXT_1 register. + */ +static u32 ev_ch_e_cntxt_1_length_encode(enum ipa_version version, u32 length) +{ + if (version < IPA_VERSION_4_9) + return u32_encode_bits(length, GENMASK(15, 0)); + + return u32_encode_bits(length, GENMASK(19, 0)); +} + /* Update the GSI IRQ type register with the cached value */ static void gsi_irq_type_update(struct gsi *gsi, u32 val) { @@ -191,12 +226,12 @@ static void gsi_irq_type_update(struct gsi *gsi, u32 val) static void gsi_irq_type_enable(struct gsi *gsi, enum gsi_irq_type_id type_id) { - gsi_irq_type_update(gsi, gsi->type_enabled_bitmap | BIT(type_id)); + gsi_irq_type_update(gsi, gsi->type_enabled_bitmap | type_id); } static void gsi_irq_type_disable(struct gsi *gsi, enum gsi_irq_type_id type_id) { - gsi_irq_type_update(gsi, gsi->type_enabled_bitmap & ~BIT(type_id)); + gsi_irq_type_update(gsi, gsi->type_enabled_bitmap & ~type_id); } /* Event ring commands are performed one at a time. Their completion @@ -292,19 +327,19 @@ static void gsi_irq_enable(struct gsi *gsi) /* Global interrupts include hardware error reports. Enable * that so we can at least report the error should it occur. */ - iowrite32(BIT(ERROR_INT), gsi->virt + GSI_CNTXT_GLOB_IRQ_EN_OFFSET); - gsi_irq_type_update(gsi, gsi->type_enabled_bitmap | BIT(GSI_GLOB_EE)); + iowrite32(ERROR_INT, gsi->virt + GSI_CNTXT_GLOB_IRQ_EN_OFFSET); + gsi_irq_type_update(gsi, gsi->type_enabled_bitmap | GSI_GLOB_EE); /* General GSI interrupts are reported to all EEs; if they occur * they are unrecoverable (without reset). A breakpoint interrupt * also exists, but we don't support that. We want to be notified * of errors so we can report them, even if they can't be handled. 
*/ - val = BIT(BUS_ERROR); - val |= BIT(CMD_FIFO_OVRFLOW); - val |= BIT(MCS_STACK_OVRFLOW); + val = BUS_ERROR; + val |= CMD_FIFO_OVRFLOW; + val |= MCS_STACK_OVRFLOW; iowrite32(val, gsi->virt + GSI_CNTXT_GSI_IRQ_EN_OFFSET); - gsi_irq_type_update(gsi, gsi->type_enabled_bitmap | BIT(GSI_GENERAL)); + gsi_irq_type_update(gsi, gsi->type_enabled_bitmap | GSI_GENERAL); } /* Disable all GSI interrupt types */ @@ -676,7 +711,7 @@ static void gsi_evt_ring_program(struct gsi *gsi, u32 evt_ring_id) iowrite32(val, gsi->virt + GSI_EV_CH_E_CNTXT_0_OFFSET(evt_ring_id)); size = ring->count * GSI_RING_ELEMENT_SIZE; - val = ev_r_length_encoded(gsi->version, size); + val = ev_ch_e_cntxt_1_length_encode(gsi->version, size); iowrite32(val, gsi->virt + GSI_EV_CH_E_CNTXT_1_OFFSET(evt_ring_id)); /* The context 2 and 3 registers store the low-order and @@ -765,14 +800,14 @@ static void gsi_channel_program(struct gsi_channel *channel, bool doorbell) u32 val; /* We program all channels as GPI type/protocol */ - val = chtype_protocol_encoded(gsi->version, GSI_CHANNEL_TYPE_GPI); + val = ch_c_cntxt_0_type_encode(gsi->version, GSI_CHANNEL_TYPE_GPI); if (channel->toward_ipa) val |= CHTYPE_DIR_FMASK; val |= u32_encode_bits(channel->evt_ring_id, ERINDEX_FMASK); val |= u32_encode_bits(GSI_RING_ELEMENT_SIZE, ELEMENT_SIZE_FMASK); iowrite32(val, gsi->virt + GSI_CH_C_CNTXT_0_OFFSET(channel_id)); - val = r_length_encoded(gsi->version, size); + val = ch_c_cntxt_1_length_encode(gsi->version, size); iowrite32(val, gsi->virt + GSI_CH_C_CNTXT_1_OFFSET(channel_id)); /* The context 2 and 3 registers store the low-order and @@ -1195,15 +1230,15 @@ static void gsi_isr_glob_ee(struct gsi *gsi) val = ioread32(gsi->virt + GSI_CNTXT_GLOB_IRQ_STTS_OFFSET); - if (val & BIT(ERROR_INT)) + if (val & ERROR_INT) gsi_isr_glob_err(gsi); iowrite32(val, gsi->virt + GSI_CNTXT_GLOB_IRQ_CLR_OFFSET); - val &= ~BIT(ERROR_INT); + val &= ~ERROR_INT; - if (val & BIT(GP_INT1)) { - val ^= BIT(GP_INT1); + if (val & GP_INT1) { + val ^= GP_INT1; gsi_isr_gp_int1(gsi); } @@ -1264,19 +1299,19 @@ static irqreturn_t gsi_isr(int irq, void *dev_id) intr_mask ^= gsi_intr; switch (gsi_intr) { - case BIT(GSI_CH_CTRL): + case GSI_CH_CTRL: gsi_isr_chan_ctrl(gsi); break; - case BIT(GSI_EV_CTRL): + case GSI_EV_CTRL: gsi_isr_evt_ctrl(gsi); break; - case BIT(GSI_GLOB_EE): + case GSI_GLOB_EE: gsi_isr_glob_ee(gsi); break; - case BIT(GSI_IEOB): + case GSI_IEOB: gsi_isr_ieob(gsi); break; - case BIT(GSI_GENERAL): + case GSI_GENERAL: gsi_isr_general(gsi); break; default: @@ -1654,7 +1689,7 @@ static int gsi_generic_command(struct gsi *gsi, u32 channel_id, * channel), and only from this function. So we enable the GP_INT1 * IRQ type here, and disable it again after the command completes. 
*/ - val = BIT(ERROR_INT) | BIT(GP_INT1); + val = ERROR_INT | GP_INT1; iowrite32(val, gsi->virt + GSI_CNTXT_GLOB_IRQ_EN_OFFSET); /* First zero the result code field */ @@ -1666,12 +1701,13 @@ static int gsi_generic_command(struct gsi *gsi, u32 channel_id, val = u32_encode_bits(opcode, GENERIC_OPCODE_FMASK); val |= u32_encode_bits(channel_id, GENERIC_CHID_FMASK); val |= u32_encode_bits(GSI_EE_MODEM, GENERIC_EE_FMASK); - val |= u32_encode_bits(params, GENERIC_PARAMS_FMASK); + if (gsi->version >= IPA_VERSION_4_11) + val |= u32_encode_bits(params, GENERIC_PARAMS_FMASK); timeout = !gsi_command(gsi, GSI_GENERIC_CMD_OFFSET, val); /* Disable the GP_INT1 IRQ type again */ - iowrite32(BIT(ERROR_INT), gsi->virt + GSI_CNTXT_GLOB_IRQ_EN_OFFSET); + iowrite32(ERROR_INT, gsi->virt + GSI_CNTXT_GLOB_IRQ_EN_OFFSET); if (!timeout) return gsi->result; diff --git a/drivers/net/ipa/gsi_reg.h b/drivers/net/ipa/gsi_reg.h index 3763359f208f..d171f65d4198 100644 --- a/drivers/net/ipa/gsi_reg.h +++ b/drivers/net/ipa/gsi_reg.h @@ -62,6 +62,18 @@ /* All other register offsets are relative to gsi->virt */ +#define GSI_CH_C_CNTXT_0_OFFSET(ch) \ + (0x0001c000 + 0x4000 * GSI_EE_AP + 0x80 * (ch)) +#define CHTYPE_PROTOCOL_FMASK GENMASK(2, 0) +#define CHTYPE_DIR_FMASK GENMASK(3, 3) +#define EE_FMASK GENMASK(7, 4) +#define CHID_FMASK GENMASK(12, 8) +/* The next field is present for IPA v4.5 and above */ +#define CHTYPE_PROTOCOL_MSB_FMASK GENMASK(13, 13) +#define ERINDEX_FMASK GENMASK(18, 14) +#define CHSTATE_FMASK GENMASK(23, 20) +#define ELEMENT_SIZE_FMASK GENMASK(31, 24) + /** enum gsi_channel_type - CHTYPE_PROTOCOL field values in CH_C_CNTXT_0 */ enum gsi_channel_type { GSI_CHANNEL_TYPE_MHI = 0x0, @@ -76,46 +88,9 @@ enum gsi_channel_type { GSI_CHANNEL_TYPE_11AD = 0x9, }; -#define GSI_CH_C_CNTXT_0_OFFSET(ch) \ - (0x0001c000 + 0x4000 * GSI_EE_AP + 0x80 * (ch)) -#define CHTYPE_PROTOCOL_FMASK GENMASK(2, 0) -#define CHTYPE_DIR_FMASK GENMASK(3, 3) -#define EE_FMASK GENMASK(7, 4) -#define CHID_FMASK GENMASK(12, 8) -/* The next field is present for IPA v4.5 and above */ -#define CHTYPE_PROTOCOL_MSB_FMASK GENMASK(13, 13) -#define ERINDEX_FMASK GENMASK(18, 14) -#define CHSTATE_FMASK GENMASK(23, 20) -#define ELEMENT_SIZE_FMASK GENMASK(31, 24) - -/* Encoded value for CH_C_CNTXT_0 register channel protocol fields */ -static inline u32 -chtype_protocol_encoded(enum ipa_version version, enum gsi_channel_type type) -{ - u32 val; - - val = u32_encode_bits(type, CHTYPE_PROTOCOL_FMASK); - if (version < IPA_VERSION_4_5) - return val; - - /* Encode upper bit(s) as well */ - type >>= hweight32(CHTYPE_PROTOCOL_FMASK); - val |= u32_encode_bits(type, CHTYPE_PROTOCOL_MSB_FMASK); - - return val; -} - #define GSI_CH_C_CNTXT_1_OFFSET(ch) \ (0x0001c004 + 0x4000 * GSI_EE_AP + 0x80 * (ch)) -/* Encoded value for CH_C_CNTXT_1 register R_LENGTH field */ -static inline u32 r_length_encoded(enum ipa_version version, u32 length) -{ - if (version < IPA_VERSION_4_9) - return u32_encode_bits(length, GENMASK(15, 0)); - return u32_encode_bits(length, GENMASK(19, 0)); -} - #define GSI_CH_C_CNTXT_2_OFFSET(ch) \ (0x0001c008 + 0x4000 * GSI_EE_AP + 0x80 * (ch)) @@ -167,13 +142,6 @@ enum gsi_prefetch_mode { #define GSI_EV_CH_E_CNTXT_1_OFFSET(ev) \ (0x0001d004 + 0x4000 * GSI_EE_AP + 0x80 * (ev)) -/* Encoded value for EV_CH_C_CNTXT_1 register EV_R_LENGTH field */ -static inline u32 ev_r_length_encoded(enum ipa_version version, u32 length) -{ - if (version < IPA_VERSION_4_9) - return u32_encode_bits(length, GENMASK(15, 0)); - return u32_encode_bits(length, GENMASK(19, 0)); 
-} #define GSI_EV_CH_E_CNTXT_2_OFFSET(ev) \ (0x0001d008 + 0x4000 * GSI_EE_AP + 0x80 * (ev)) @@ -299,15 +267,25 @@ enum gsi_iram_size { #define GSI_CNTXT_TYPE_IRQ_MSK_OFFSET \ (0x0001f088 + 0x4000 * GSI_EE_AP) -/* Values here are bit positions in the TYPE_IRQ and TYPE_IRQ_MSK registers */ +/** + * enum gsi_irq_type_id: GSI IRQ types + * @GSI_CH_CTRL: Channel allocation, deallocation, etc. + * @GSI_EV_CTRL: Event ring allocation, deallocation, etc. + * @GSI_GLOB_EE: Global/general event + * @GSI_IEOB: Transfer (TRE) completion + * @GSI_INTER_EE_CH_CTRL: Remote-issued stop/reset (unused) + * @GSI_INTER_EE_EV_CTRL: Remote-issued event reset (unused) + * @GSI_GENERAL: General hardware event (bus error, etc.) + */ enum gsi_irq_type_id { - GSI_CH_CTRL = 0x0, /* channel allocation, etc. */ - GSI_EV_CTRL = 0x1, /* event ring allocation, etc. */ - GSI_GLOB_EE = 0x2, /* global/general event */ - GSI_IEOB = 0x3, /* TRE completion */ - GSI_INTER_EE_CH_CTRL = 0x4, /* remote-issued stop/reset (unused) */ - GSI_INTER_EE_EV_CTRL = 0x5, /* remote-issued event reset (unused) */ - GSI_GENERAL = 0x6, /* general-purpose event */ + GSI_CH_CTRL = BIT(0), + GSI_EV_CTRL = BIT(1), + GSI_GLOB_EE = BIT(2), + GSI_IEOB = BIT(3), + GSI_INTER_EE_CH_CTRL = BIT(4), + GSI_INTER_EE_EV_CTRL = BIT(5), + GSI_GENERAL = BIT(6), + /* IRQ types 7-31 (and their bit values) are reserved */ }; #define GSI_CNTXT_SRC_CH_IRQ_OFFSET \ @@ -343,12 +321,14 @@ enum gsi_irq_type_id { (0x0001f108 + 0x4000 * GSI_EE_AP) #define GSI_CNTXT_GLOB_IRQ_CLR_OFFSET \ (0x0001f110 + 0x4000 * GSI_EE_AP) -/* Values here are bit positions in the GLOB_IRQ_* registers */ + +/** enum gsi_global_irq_id: Global GSI interrupt events */ enum gsi_global_irq_id { - ERROR_INT = 0x0, - GP_INT1 = 0x1, - GP_INT2 = 0x2, - GP_INT3 = 0x3, + ERROR_INT = BIT(0), + GP_INT1 = BIT(1), + GP_INT2 = BIT(2), + GP_INT3 = BIT(3), + /* Global IRQ types 4-31 (and their bit values) are reserved */ }; #define GSI_CNTXT_GSI_IRQ_STTS_OFFSET \ @@ -357,12 +337,14 @@ enum gsi_global_irq_id { (0x0001f120 + 0x4000 * GSI_EE_AP) #define GSI_CNTXT_GSI_IRQ_CLR_OFFSET \ (0x0001f128 + 0x4000 * GSI_EE_AP) -/* Values here are bit positions in the (general) GSI_IRQ_* registers */ -enum gsi_general_id { - BREAK_POINT = 0x0, - BUS_ERROR = 0x1, - CMD_FIFO_OVRFLOW = 0x2, - MCS_STACK_OVRFLOW = 0x3, + +/** enum gsi_general_irq_id: GSI general IRQ conditions */ +enum gsi_general_irq_id { + BREAK_POINT = BIT(0), + BUS_ERROR = BIT(1), + CMD_FIFO_OVRFLOW = BIT(2), + MCS_STACK_OVRFLOW = BIT(3), + /* General IRQ types 4-31 (and their bit values) are reserved */ }; #define GSI_CNTXT_INTSET_OFFSET \ @@ -372,7 +354,6 @@ enum gsi_general_id { #define GSI_ERROR_LOG_OFFSET \ (0x0001f200 + 0x4000 * GSI_EE_AP) -/* Fields below are present for IPA v3.5.1 and above */ #define ERR_ARG3_FMASK GENMASK(3, 0) #define ERR_ARG2_FMASK GENMASK(7, 4) #define ERR_ARG1_FMASK GENMASK(11, 8) diff --git a/drivers/net/ipa/ipa.h b/drivers/net/ipa/ipa.h index 5372db58b5bd..f3355e040a9e 100644 --- a/drivers/net/ipa/ipa.h +++ b/drivers/net/ipa/ipa.h @@ -45,7 +45,6 @@ struct ipa_interrupt; * @interrupt: IPA Interrupt information * @uc_powered: true if power is active by proxy for microcontroller * @uc_loaded: true after microcontroller has reported it's ready - * @reg_addr: DMA address used for IPA register access * @reg_virt: Virtual address used for IPA register access * @regs: IPA register definitions * @mem_addr: DMA address of IPA-local memory space @@ -97,9 +96,8 @@ struct ipa { bool uc_powered; bool uc_loaded; - dma_addr_t reg_addr; void 
__iomem *reg_virt; - const struct ipa_regs *regs; + const struct regs *regs; dma_addr_t mem_addr; void *mem_virt; diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c index 16169641ddeb..f1419fbd776c 100644 --- a/drivers/net/ipa/ipa_cmd.c +++ b/drivers/net/ipa/ipa_cmd.c @@ -287,7 +287,7 @@ static bool ipa_cmd_register_write_offset_valid(struct ipa *ipa, /* Check whether offsets passed to register_write are valid */ static bool ipa_cmd_register_write_valid(struct ipa *ipa) { - const struct ipa_reg *reg; + const struct reg *reg; const char *name; u32 offset; @@ -300,7 +300,7 @@ static bool ipa_cmd_register_write_valid(struct ipa *ipa) else reg = ipa_reg(ipa, FILT_ROUT_CACHE_FLUSH); - offset = ipa_reg_offset(reg); + offset = reg_offset(reg); name = "filter/route hash flush"; if (!ipa_cmd_register_write_offset_valid(ipa, name, offset)) return false; @@ -314,7 +314,7 @@ static bool ipa_cmd_register_write_valid(struct ipa *ipa) * fits in the register write command field(s) that must hold it. */ reg = ipa_reg(ipa, ENDP_STATUS); - offset = ipa_reg_n_offset(reg, IPA_ENDPOINT_COUNT - 1); + offset = reg_n_offset(reg, IPA_ENDPOINT_COUNT - 1); name = "maximal endpoint status"; if (!ipa_cmd_register_write_offset_valid(ipa, name, offset)) return false; diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c index 798dfa4484d5..2ee80ed140b7 100644 --- a/drivers/net/ipa/ipa_endpoint.c +++ b/drivers/net/ipa/ipa_endpoint.c @@ -241,7 +241,7 @@ static bool ipa_endpoint_data_valid_one(struct ipa *ipa, u32 count, if (!data->toward_ipa) { const struct ipa_endpoint_rx *rx_config; - const struct ipa_reg *reg; + const struct reg *reg; u32 buffer_size; u32 aggr_size; u32 limit; @@ -304,7 +304,7 @@ static bool ipa_endpoint_data_valid_one(struct ipa *ipa, u32 count, rx_config->aggr_hard_limit); reg = ipa_reg(ipa, ENDP_INIT_AGGR); - limit = ipa_reg_field_max(reg, BYTE_LIMIT); + limit = reg_field_max(reg, BYTE_LIMIT); if (aggr_size > limit) { dev_err(dev, "aggregated size too large for RX endpoint %u (%u KB > %u KB)\n", data->endpoint_id, aggr_size, limit); @@ -447,7 +447,7 @@ static bool ipa_endpoint_init_ctrl(struct ipa_endpoint *endpoint, bool suspend_delay) { struct ipa *ipa = endpoint->ipa; - const struct ipa_reg *reg; + const struct reg *reg; u32 field_id; u32 offset; bool state; @@ -460,11 +460,11 @@ ipa_endpoint_init_ctrl(struct ipa_endpoint *endpoint, bool suspend_delay) WARN_ON(ipa->version >= IPA_VERSION_4_0); reg = ipa_reg(ipa, ENDP_INIT_CTRL); - offset = ipa_reg_n_offset(reg, endpoint->endpoint_id); + offset = reg_n_offset(reg, endpoint->endpoint_id); val = ioread32(ipa->reg_virt + offset); field_id = endpoint->toward_ipa ? 
ENDP_DELAY : ENDP_SUSPEND; - mask = ipa_reg_bit(reg, field_id); + mask = reg_bit(reg, field_id); state = !!(val & mask); @@ -493,13 +493,13 @@ static bool ipa_endpoint_aggr_active(struct ipa_endpoint *endpoint) u32 endpoint_id = endpoint->endpoint_id; struct ipa *ipa = endpoint->ipa; u32 unit = endpoint_id / 32; - const struct ipa_reg *reg; + const struct reg *reg; u32 val; WARN_ON(!test_bit(endpoint_id, ipa->available)); reg = ipa_reg(ipa, STATE_AGGR_ACTIVE); - val = ioread32(ipa->reg_virt + ipa_reg_n_offset(reg, unit)); + val = ioread32(ipa->reg_virt + reg_n_offset(reg, unit)); return !!(val & BIT(endpoint_id % 32)); } @@ -510,12 +510,12 @@ static void ipa_endpoint_force_close(struct ipa_endpoint *endpoint) u32 mask = BIT(endpoint_id % 32); struct ipa *ipa = endpoint->ipa; u32 unit = endpoint_id / 32; - const struct ipa_reg *reg; + const struct reg *reg; WARN_ON(!test_bit(endpoint_id, ipa->available)); reg = ipa_reg(ipa, AGGR_FORCE_CLOSE); - iowrite32(mask, ipa->reg_virt + ipa_reg_n_offset(reg, unit)); + iowrite32(mask, ipa->reg_virt + reg_n_offset(reg, unit)); } /** @@ -613,7 +613,7 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa) for_each_set_bit(endpoint_id, ipa->defined, ipa->endpoint_count) { struct ipa_endpoint *endpoint; - const struct ipa_reg *reg; + const struct reg *reg; u32 offset; /* We only reset modem TX endpoints */ @@ -622,7 +622,7 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa) continue; reg = ipa_reg(ipa, ENDP_STATUS); - offset = ipa_reg_n_offset(reg, endpoint_id); + offset = reg_n_offset(reg, endpoint_id); /* Value written is 0, and all bits are updated. That * means status is disabled on the endpoint, and as a @@ -645,7 +645,7 @@ static void ipa_endpoint_init_cfg(struct ipa_endpoint *endpoint) u32 endpoint_id = endpoint->endpoint_id; struct ipa *ipa = endpoint->ipa; enum ipa_cs_offload_en enabled; - const struct ipa_reg *reg; + const struct reg *reg; u32 val = 0; reg = ipa_reg(ipa, ENDP_INIT_CFG); @@ -658,7 +658,7 @@ static void ipa_endpoint_init_cfg(struct ipa_endpoint *endpoint) /* Checksum header offset is in 4-byte units */ off = sizeof(struct rmnet_map_header) / sizeof(u32); - val |= ipa_reg_encode(reg, CS_METADATA_HDR_OFFSET, off); + val |= reg_encode(reg, CS_METADATA_HDR_OFFSET, off); enabled = version < IPA_VERSION_4_5 ? 
IPA_CS_OFFLOAD_UL @@ -671,26 +671,26 @@ static void ipa_endpoint_init_cfg(struct ipa_endpoint *endpoint) } else { enabled = IPA_CS_OFFLOAD_NONE; } - val |= ipa_reg_encode(reg, CS_OFFLOAD_EN, enabled); + val |= reg_encode(reg, CS_OFFLOAD_EN, enabled); /* CS_GEN_QMB_MASTER_SEL is 0 */ - iowrite32(val, ipa->reg_virt + ipa_reg_n_offset(reg, endpoint_id)); + iowrite32(val, ipa->reg_virt + reg_n_offset(reg, endpoint_id)); } static void ipa_endpoint_init_nat(struct ipa_endpoint *endpoint) { u32 endpoint_id = endpoint->endpoint_id; struct ipa *ipa = endpoint->ipa; - const struct ipa_reg *reg; + const struct reg *reg; u32 val; if (!endpoint->toward_ipa) return; reg = ipa_reg(ipa, ENDP_INIT_NAT); - val = ipa_reg_encode(reg, NAT_EN, IPA_NAT_TYPE_BYPASS); + val = reg_encode(reg, NAT_EN, IPA_NAT_TYPE_BYPASS); - iowrite32(val, ipa->reg_virt + ipa_reg_n_offset(reg, endpoint_id)); + iowrite32(val, ipa->reg_virt + reg_n_offset(reg, endpoint_id)); } static u32 @@ -716,13 +716,13 @@ ipa_qmap_header_size(enum ipa_version version, struct ipa_endpoint *endpoint) /* Encoded value for ENDP_INIT_HDR register HDR_LEN* field(s) */ static u32 ipa_header_size_encode(enum ipa_version version, - const struct ipa_reg *reg, u32 header_size) + const struct reg *reg, u32 header_size) { - u32 field_max = ipa_reg_field_max(reg, HDR_LEN); + u32 field_max = reg_field_max(reg, HDR_LEN); u32 val; /* We know field_max can be used as a mask (2^n - 1) */ - val = ipa_reg_encode(reg, HDR_LEN, header_size & field_max); + val = reg_encode(reg, HDR_LEN, header_size & field_max); if (version < IPA_VERSION_4_5) { WARN_ON(header_size > field_max); return val; @@ -730,21 +730,21 @@ static u32 ipa_header_size_encode(enum ipa_version version, /* IPA v4.5 adds a few more most-significant bits */ header_size >>= hweight32(field_max); - WARN_ON(header_size > ipa_reg_field_max(reg, HDR_LEN_MSB)); - val |= ipa_reg_encode(reg, HDR_LEN_MSB, header_size); + WARN_ON(header_size > reg_field_max(reg, HDR_LEN_MSB)); + val |= reg_encode(reg, HDR_LEN_MSB, header_size); return val; } /* Encoded value for ENDP_INIT_HDR register OFST_METADATA* field(s) */ static u32 ipa_metadata_offset_encode(enum ipa_version version, - const struct ipa_reg *reg, u32 offset) + const struct reg *reg, u32 offset) { - u32 field_max = ipa_reg_field_max(reg, HDR_OFST_METADATA); + u32 field_max = reg_field_max(reg, HDR_OFST_METADATA); u32 val; /* We know field_max can be used as a mask (2^n - 1) */ - val = ipa_reg_encode(reg, HDR_OFST_METADATA, offset); + val = reg_encode(reg, HDR_OFST_METADATA, offset); if (version < IPA_VERSION_4_5) { WARN_ON(offset > field_max); return val; @@ -752,8 +752,8 @@ static u32 ipa_metadata_offset_encode(enum ipa_version version, /* IPA v4.5 adds a few more most-significant bits */ offset >>= hweight32(field_max); - WARN_ON(offset > ipa_reg_field_max(reg, HDR_OFST_METADATA_MSB)); - val |= ipa_reg_encode(reg, HDR_OFST_METADATA_MSB, offset); + WARN_ON(offset > reg_field_max(reg, HDR_OFST_METADATA_MSB)); + val |= reg_encode(reg, HDR_OFST_METADATA_MSB, offset); return val; } @@ -783,7 +783,7 @@ static void ipa_endpoint_init_hdr(struct ipa_endpoint *endpoint) { u32 endpoint_id = endpoint->endpoint_id; struct ipa *ipa = endpoint->ipa; - const struct ipa_reg *reg; + const struct reg *reg; u32 val = 0; reg = ipa_reg(ipa, ENDP_INIT_HDR); @@ -806,13 +806,13 @@ static void ipa_endpoint_init_hdr(struct ipa_endpoint *endpoint) off = offsetof(struct rmnet_map_header, pkt_len); /* Upper bits are stored in HDR_EXT with IPA v4.5 */ if (version >= IPA_VERSION_4_5) - off 
&= ipa_reg_field_max(reg, HDR_OFST_PKT_SIZE); + off &= reg_field_max(reg, HDR_OFST_PKT_SIZE); - val |= ipa_reg_bit(reg, HDR_OFST_PKT_SIZE_VALID); - val |= ipa_reg_encode(reg, HDR_OFST_PKT_SIZE, off); + val |= reg_bit(reg, HDR_OFST_PKT_SIZE_VALID); + val |= reg_encode(reg, HDR_OFST_PKT_SIZE, off); } /* For QMAP TX, metadata offset is 0 (modem assumes this) */ - val |= ipa_reg_bit(reg, HDR_OFST_METADATA_VALID); + val |= reg_bit(reg, HDR_OFST_METADATA_VALID); /* HDR_ADDITIONAL_CONST_LEN is 0; (RX only) */ /* HDR_A5_MUX is 0 */ @@ -820,7 +820,7 @@ static void ipa_endpoint_init_hdr(struct ipa_endpoint *endpoint) /* HDR_METADATA_REG_VALID is 0 (TX only, version < v4.5) */ } - iowrite32(val, ipa->reg_virt + ipa_reg_n_offset(reg, endpoint_id)); + iowrite32(val, ipa->reg_virt + reg_n_offset(reg, endpoint_id)); } static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint) @@ -828,13 +828,13 @@ static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint) u32 pad_align = endpoint->config.rx.pad_align; u32 endpoint_id = endpoint->endpoint_id; struct ipa *ipa = endpoint->ipa; - const struct ipa_reg *reg; + const struct reg *reg; u32 val = 0; reg = ipa_reg(ipa, ENDP_INIT_HDR_EXT); if (endpoint->config.qmap) { /* We have a header, so we must specify its endianness */ - val |= ipa_reg_bit(reg, HDR_ENDIANNESS); /* big endian */ + val |= reg_bit(reg, HDR_ENDIANNESS); /* big endian */ /* A QMAP header contains a 6 bit pad field at offset 0. * The RMNet driver assumes this field is meaningful in @@ -844,16 +844,16 @@ static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint) * (although 0) should be ignored. */ if (!endpoint->toward_ipa) { - val |= ipa_reg_bit(reg, HDR_TOTAL_LEN_OR_PAD_VALID); + val |= reg_bit(reg, HDR_TOTAL_LEN_OR_PAD_VALID); /* HDR_TOTAL_LEN_OR_PAD is 0 (pad, not total_len) */ - val |= ipa_reg_bit(reg, HDR_PAYLOAD_LEN_INC_PADDING); + val |= reg_bit(reg, HDR_PAYLOAD_LEN_INC_PADDING); /* HDR_TOTAL_LEN_OR_PAD_OFFSET is 0 */ } } /* HDR_PAYLOAD_LEN_INC_PADDING is 0 */ if (!endpoint->toward_ipa) - val |= ipa_reg_encode(reg, HDR_PAD_TO_ALIGNMENT, pad_align); + val |= reg_encode(reg, HDR_PAD_TO_ALIGNMENT, pad_align); /* IPA v4.5 adds some most-significant bits to a few fields, * two of which are defined in the HDR (not HDR_EXT) register. 
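/*
 * A minimal sketch of the split-field pattern used above, assuming
 * field_max is an all-ones mask (2^n - 1): the low n bits of a wide
 * value land in the base field and, on IPA v4.5+, the remainder lands
 * in the matching *_MSB field.  Plain shifts stand in for the
 * reg_encode() calls, and msb_shift is a placeholder for the real
 * MSB field position; neither is driver API.
 */
static u32 wide_field_encode(u32 value, u32 field_max, u32 msb_shift)
{
	u32 val = value & field_max;	/* e.g. HDR_LEN, HDR_OFST_PKT_SIZE */

	/* hweight32(field_max) == n, the width of the low field */
	return val | ((value >> hweight32(field_max)) << msb_shift);
}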
@@ -861,25 +861,25 @@ static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint) if (ipa->version >= IPA_VERSION_4_5) { /* HDR_TOTAL_LEN_OR_PAD_OFFSET is 0, so MSB is 0 */ if (endpoint->config.qmap && !endpoint->toward_ipa) { - u32 mask = ipa_reg_field_max(reg, HDR_OFST_PKT_SIZE); + u32 mask = reg_field_max(reg, HDR_OFST_PKT_SIZE); u32 off; /* Field offset within header */ off = offsetof(struct rmnet_map_header, pkt_len); /* Low bits are in the ENDP_INIT_HDR register */ off >>= hweight32(mask); - val |= ipa_reg_encode(reg, HDR_OFST_PKT_SIZE_MSB, off); + val |= reg_encode(reg, HDR_OFST_PKT_SIZE_MSB, off); /* HDR_ADDITIONAL_CONST_LEN is 0 so MSB is 0 */ } } - iowrite32(val, ipa->reg_virt + ipa_reg_n_offset(reg, endpoint_id)); + iowrite32(val, ipa->reg_virt + reg_n_offset(reg, endpoint_id)); } static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint) { u32 endpoint_id = endpoint->endpoint_id; struct ipa *ipa = endpoint->ipa; - const struct ipa_reg *reg; + const struct reg *reg; u32 val = 0; u32 offset; @@ -887,7 +887,7 @@ static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint) return; /* Register not valid for TX endpoints */ reg = ipa_reg(ipa, ENDP_INIT_HDR_METADATA_MASK); - offset = ipa_reg_n_offset(reg, endpoint_id); + offset = reg_n_offset(reg, endpoint_id); /* Note that HDR_ENDIANNESS indicates big endian header fields */ if (endpoint->config.qmap) @@ -899,7 +899,7 @@ static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint) static void ipa_endpoint_init_mode(struct ipa_endpoint *endpoint) { struct ipa *ipa = endpoint->ipa; - const struct ipa_reg *reg; + const struct reg *reg; u32 offset; u32 val; @@ -911,14 +911,14 @@ static void ipa_endpoint_init_mode(struct ipa_endpoint *endpoint) enum ipa_endpoint_name name = endpoint->config.dma_endpoint; u32 dma_endpoint_id = ipa->name_map[name]->endpoint_id; - val = ipa_reg_encode(reg, ENDP_MODE, IPA_DMA); - val |= ipa_reg_encode(reg, DEST_PIPE_INDEX, dma_endpoint_id); + val = reg_encode(reg, ENDP_MODE, IPA_DMA); + val |= reg_encode(reg, DEST_PIPE_INDEX, dma_endpoint_id); } else { - val = ipa_reg_encode(reg, ENDP_MODE, IPA_BASIC); + val = reg_encode(reg, ENDP_MODE, IPA_BASIC); } /* All other bits unspecified (and 0) */ - offset = ipa_reg_n_offset(reg, endpoint->endpoint_id); + offset = reg_n_offset(reg, endpoint->endpoint_id); iowrite32(val, ipa->reg_virt + offset); } @@ -963,7 +963,7 @@ out: } /* Encode the aggregation timer limit (microseconds) based on IPA version */ -static u32 aggr_time_limit_encode(struct ipa *ipa, const struct ipa_reg *reg, +static u32 aggr_time_limit_encode(struct ipa *ipa, const struct reg *reg, u32 microseconds) { u32 ticks; @@ -972,14 +972,14 @@ static u32 aggr_time_limit_encode(struct ipa *ipa, const struct ipa_reg *reg, if (!microseconds) return 0; /* Nothing to compute if time limit is 0 */ - max = ipa_reg_field_max(reg, TIME_LIMIT); + max = reg_field_max(reg, TIME_LIMIT); if (ipa->version >= IPA_VERSION_4_5) { u32 select; ticks = ipa_qtime_val(ipa, microseconds, max, &select); - return ipa_reg_encode(reg, AGGR_GRAN_SEL, select) | - ipa_reg_encode(reg, TIME_LIMIT, ticks); + return reg_encode(reg, AGGR_GRAN_SEL, select) | + reg_encode(reg, TIME_LIMIT, ticks); } /* We program aggregation granularity in ipa_hardware_config() */ @@ -987,14 +987,14 @@ static u32 aggr_time_limit_encode(struct ipa *ipa, const struct ipa_reg *reg, WARN(ticks > max, "aggr_time_limit too large (%u > %u usec)\n", microseconds, max * IPA_AGGR_GRANULARITY); - return 
ipa_reg_encode(reg, TIME_LIMIT, ticks); + return reg_encode(reg, TIME_LIMIT, ticks); } static void ipa_endpoint_init_aggr(struct ipa_endpoint *endpoint) { u32 endpoint_id = endpoint->endpoint_id; struct ipa *ipa = endpoint->ipa; - const struct ipa_reg *reg; + const struct reg *reg; u32 val = 0; reg = ipa_reg(ipa, ENDP_INIT_AGGR); @@ -1005,13 +1005,13 @@ static void ipa_endpoint_init_aggr(struct ipa_endpoint *endpoint) u32 limit; rx_config = &endpoint->config.rx; - val |= ipa_reg_encode(reg, AGGR_EN, IPA_ENABLE_AGGR); - val |= ipa_reg_encode(reg, AGGR_TYPE, IPA_GENERIC); + val |= reg_encode(reg, AGGR_EN, IPA_ENABLE_AGGR); + val |= reg_encode(reg, AGGR_TYPE, IPA_GENERIC); buffer_size = rx_config->buffer_size; limit = ipa_aggr_size_kb(buffer_size - NET_SKB_PAD, rx_config->aggr_hard_limit); - val |= ipa_reg_encode(reg, BYTE_LIMIT, limit); + val |= reg_encode(reg, BYTE_LIMIT, limit); limit = rx_config->aggr_time_limit; val |= aggr_time_limit_encode(ipa, reg, limit); @@ -1019,20 +1019,20 @@ static void ipa_endpoint_init_aggr(struct ipa_endpoint *endpoint) /* AGGR_PKT_LIMIT is 0 (unlimited) */ if (rx_config->aggr_close_eof) - val |= ipa_reg_bit(reg, SW_EOF_ACTIVE); + val |= reg_bit(reg, SW_EOF_ACTIVE); } else { - val |= ipa_reg_encode(reg, AGGR_EN, IPA_ENABLE_DEAGGR); - val |= ipa_reg_encode(reg, AGGR_TYPE, IPA_QCMAP); + val |= reg_encode(reg, AGGR_EN, IPA_ENABLE_DEAGGR); + val |= reg_encode(reg, AGGR_TYPE, IPA_QCMAP); /* other fields ignored */ } /* AGGR_FORCE_CLOSE is 0 */ /* AGGR_GRAN_SEL is 0 for IPA v4.5 */ } else { - val |= ipa_reg_encode(reg, AGGR_EN, IPA_BYPASS_AGGR); + val |= reg_encode(reg, AGGR_EN, IPA_BYPASS_AGGR); /* other fields ignored */ } - iowrite32(val, ipa->reg_virt + ipa_reg_n_offset(reg, endpoint_id)); + iowrite32(val, ipa->reg_virt + reg_n_offset(reg, endpoint_id)); } /* The head-of-line blocking timer is defined as a tick count. For @@ -1043,7 +1043,7 @@ static void ipa_endpoint_init_aggr(struct ipa_endpoint *endpoint) * Return the encoded value representing the timeout period provided * that should be written to the ENDP_INIT_HOL_BLOCK_TIMER register. 
*/ -static u32 hol_block_timer_encode(struct ipa *ipa, const struct ipa_reg *reg, +static u32 hol_block_timer_encode(struct ipa *ipa, const struct reg *reg, u32 microseconds) { u32 width; @@ -1057,14 +1057,14 @@ static u32 hol_block_timer_encode(struct ipa *ipa, const struct ipa_reg *reg, return 0; /* Nothing to compute if timer period is 0 */ if (ipa->version >= IPA_VERSION_4_5) { - u32 max = ipa_reg_field_max(reg, TIMER_LIMIT); + u32 max = reg_field_max(reg, TIMER_LIMIT); u32 select; u32 ticks; ticks = ipa_qtime_val(ipa, microseconds, max, &select); - return ipa_reg_encode(reg, TIMER_GRAN_SEL, 1) | - ipa_reg_encode(reg, TIMER_LIMIT, ticks); + return reg_encode(reg, TIMER_GRAN_SEL, 1) | + reg_encode(reg, TIMER_LIMIT, ticks); } /* Use 64 bit arithmetic to avoid overflow */ @@ -1072,11 +1072,11 @@ static u32 hol_block_timer_encode(struct ipa *ipa, const struct ipa_reg *reg, ticks = DIV_ROUND_CLOSEST(microseconds * rate, 128 * USEC_PER_SEC); /* We still need the result to fit into the field */ - WARN_ON(ticks > ipa_reg_field_max(reg, TIMER_BASE_VALUE)); + WARN_ON(ticks > reg_field_max(reg, TIMER_BASE_VALUE)); /* IPA v3.5.1 through v4.1 just record the tick count */ if (ipa->version < IPA_VERSION_4_2) - return ipa_reg_encode(reg, TIMER_BASE_VALUE, (u32)ticks); + return reg_encode(reg, TIMER_BASE_VALUE, (u32)ticks); /* For IPA v4.2, the tick count is represented by base and * scale fields within the 32-bit timer register, where: @@ -1087,7 +1087,7 @@ static u32 hol_block_timer_encode(struct ipa *ipa, const struct ipa_reg *reg, * such that high bit is included. */ high = fls(ticks); /* 1..32 (or warning above) */ - width = hweight32(ipa_reg_fmask(reg, TIMER_BASE_VALUE)); + width = hweight32(reg_fmask(reg, TIMER_BASE_VALUE)); scale = high > width ? high - width : 0; if (scale) { /* If we're scaling, round up to get a closer result */ @@ -1097,8 +1097,8 @@ static u32 hol_block_timer_encode(struct ipa *ipa, const struct ipa_reg *reg, scale++; } - val = ipa_reg_encode(reg, TIMER_SCALE, scale); - val |= ipa_reg_encode(reg, TIMER_BASE_VALUE, (u32)ticks >> scale); + val = reg_encode(reg, TIMER_SCALE, scale); + val |= reg_encode(reg, TIMER_BASE_VALUE, (u32)ticks >> scale); return val; } @@ -1109,14 +1109,14 @@ static void ipa_endpoint_init_hol_block_timer(struct ipa_endpoint *endpoint, { u32 endpoint_id = endpoint->endpoint_id; struct ipa *ipa = endpoint->ipa; - const struct ipa_reg *reg; + const struct reg *reg; u32 val; /* This should only be changed when HOL_BLOCK_EN is disabled */ reg = ipa_reg(ipa, ENDP_INIT_HOL_BLOCK_TIMER); val = hol_block_timer_encode(ipa, reg, microseconds); - iowrite32(val, ipa->reg_virt + ipa_reg_n_offset(reg, endpoint_id)); + iowrite32(val, ipa->reg_virt + reg_n_offset(reg, endpoint_id)); } static void @@ -1124,13 +1124,13 @@ ipa_endpoint_init_hol_block_en(struct ipa_endpoint *endpoint, bool enable) { u32 endpoint_id = endpoint->endpoint_id; struct ipa *ipa = endpoint->ipa; - const struct ipa_reg *reg; + const struct reg *reg; u32 offset; u32 val; reg = ipa_reg(ipa, ENDP_INIT_HOL_BLOCK_EN); - offset = ipa_reg_n_offset(reg, endpoint_id); - val = enable ? ipa_reg_bit(reg, HOL_BLOCK_EN) : 0; + offset = reg_n_offset(reg, endpoint_id); + val = enable ? 
reg_bit(reg, HOL_BLOCK_EN) : 0; iowrite32(val, ipa->reg_virt + offset); @@ -1171,7 +1171,7 @@ static void ipa_endpoint_init_deaggr(struct ipa_endpoint *endpoint) { u32 endpoint_id = endpoint->endpoint_id; struct ipa *ipa = endpoint->ipa; - const struct ipa_reg *reg; + const struct reg *reg; u32 val = 0; if (!endpoint->toward_ipa) @@ -1183,7 +1183,7 @@ static void ipa_endpoint_init_deaggr(struct ipa_endpoint *endpoint) /* PACKET_OFFSET_LOCATION is ignored (not valid) */ /* MAX_PACKET_LEN is 0 (not enforced) */ - iowrite32(val, ipa->reg_virt + ipa_reg_n_offset(reg, endpoint_id)); + iowrite32(val, ipa->reg_virt + reg_n_offset(reg, endpoint_id)); } static void ipa_endpoint_init_rsrc_grp(struct ipa_endpoint *endpoint) @@ -1191,20 +1191,20 @@ static void ipa_endpoint_init_rsrc_grp(struct ipa_endpoint *endpoint) u32 resource_group = endpoint->config.resource_group; u32 endpoint_id = endpoint->endpoint_id; struct ipa *ipa = endpoint->ipa; - const struct ipa_reg *reg; + const struct reg *reg; u32 val; reg = ipa_reg(ipa, ENDP_INIT_RSRC_GRP); - val = ipa_reg_encode(reg, ENDP_RSRC_GRP, resource_group); + val = reg_encode(reg, ENDP_RSRC_GRP, resource_group); - iowrite32(val, ipa->reg_virt + ipa_reg_n_offset(reg, endpoint_id)); + iowrite32(val, ipa->reg_virt + reg_n_offset(reg, endpoint_id)); } static void ipa_endpoint_init_seq(struct ipa_endpoint *endpoint) { u32 endpoint_id = endpoint->endpoint_id; struct ipa *ipa = endpoint->ipa; - const struct ipa_reg *reg; + const struct reg *reg; u32 val; if (!endpoint->toward_ipa) @@ -1213,14 +1213,14 @@ static void ipa_endpoint_init_seq(struct ipa_endpoint *endpoint) reg = ipa_reg(ipa, ENDP_INIT_SEQ); /* Low-order byte configures primary packet processing */ - val = ipa_reg_encode(reg, SEQ_TYPE, endpoint->config.tx.seq_type); + val = reg_encode(reg, SEQ_TYPE, endpoint->config.tx.seq_type); /* Second byte (if supported) configures replicated packet processing */ if (ipa->version < IPA_VERSION_4_5) - val |= ipa_reg_encode(reg, SEQ_REP_TYPE, - endpoint->config.tx.seq_rep_type); + val |= reg_encode(reg, SEQ_REP_TYPE, + endpoint->config.tx.seq_rep_type); - iowrite32(val, ipa->reg_virt + ipa_reg_n_offset(reg, endpoint_id)); + iowrite32(val, ipa->reg_virt + reg_n_offset(reg, endpoint_id)); } /** @@ -1270,12 +1270,12 @@ static void ipa_endpoint_status(struct ipa_endpoint *endpoint) { u32 endpoint_id = endpoint->endpoint_id; struct ipa *ipa = endpoint->ipa; - const struct ipa_reg *reg; + const struct reg *reg; u32 val = 0; reg = ipa_reg(ipa, ENDP_STATUS); if (endpoint->config.status_enable) { - val |= ipa_reg_bit(reg, STATUS_EN); + val |= reg_bit(reg, STATUS_EN); if (endpoint->toward_ipa) { enum ipa_endpoint_name name; u32 status_endpoint_id; @@ -1283,8 +1283,7 @@ static void ipa_endpoint_status(struct ipa_endpoint *endpoint) name = endpoint->config.tx.status_endpoint; status_endpoint_id = ipa->name_map[name]->endpoint_id; - val |= ipa_reg_encode(reg, STATUS_ENDP, - status_endpoint_id); + val |= reg_encode(reg, STATUS_ENDP, status_endpoint_id); } /* STATUS_LOCATION is 0, meaning IPA packet status * precedes the packet (not present for IPA v4.5+) @@ -1292,7 +1291,7 @@ static void ipa_endpoint_status(struct ipa_endpoint *endpoint) /* STATUS_PKT_SUPPRESS_FMASK is 0 (not present for v4.0+) */ } - iowrite32(val, ipa->reg_virt + ipa_reg_n_offset(reg, endpoint_id)); + iowrite32(val, ipa->reg_virt + reg_n_offset(reg, endpoint_id)); } static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint, @@ -1636,18 +1635,18 @@ void ipa_endpoint_trans_release(struct ipa_endpoint 
*endpoint, void ipa_endpoint_default_route_set(struct ipa *ipa, u32 endpoint_id) { - const struct ipa_reg *reg; + const struct reg *reg; u32 val; reg = ipa_reg(ipa, ROUTE); /* ROUTE_DIS is 0 */ - val = ipa_reg_encode(reg, ROUTE_DEF_PIPE, endpoint_id); - val |= ipa_reg_bit(reg, ROUTE_DEF_HDR_TABLE); + val = reg_encode(reg, ROUTE_DEF_PIPE, endpoint_id); + val |= reg_bit(reg, ROUTE_DEF_HDR_TABLE); /* ROUTE_DEF_HDR_OFST is 0 */ - val |= ipa_reg_encode(reg, ROUTE_FRAG_DEF_PIPE, endpoint_id); - val |= ipa_reg_bit(reg, ROUTE_DEF_RETAIN_HDR); + val |= reg_encode(reg, ROUTE_FRAG_DEF_PIPE, endpoint_id); + val |= reg_bit(reg, ROUTE_DEF_RETAIN_HDR); - iowrite32(val, ipa->reg_virt + ipa_reg_offset(reg)); + iowrite32(val, ipa->reg_virt + reg_offset(reg)); } void ipa_endpoint_default_route_clear(struct ipa *ipa) @@ -1985,7 +1984,7 @@ void ipa_endpoint_deconfig(struct ipa *ipa) int ipa_endpoint_config(struct ipa *ipa) { struct device *dev = &ipa->pdev->dev; - const struct ipa_reg *reg; + const struct reg *reg; u32 endpoint_id; u32 hw_limit; u32 tx_count; @@ -2019,12 +2018,12 @@ int ipa_endpoint_config(struct ipa *ipa) * the highest one doesn't exceed the number supported by software. */ reg = ipa_reg(ipa, FLAVOR_0); - val = ioread32(ipa->reg_virt + ipa_reg_offset(reg)); + val = ioread32(ipa->reg_virt + reg_offset(reg)); /* Our RX is an IPA producer; our TX is an IPA consumer. */ - tx_count = ipa_reg_decode(reg, MAX_CONS_PIPES, val); - rx_count = ipa_reg_decode(reg, MAX_PROD_PIPES, val); - rx_base = ipa_reg_decode(reg, PROD_LOWEST, val); + tx_count = reg_decode(reg, MAX_CONS_PIPES, val); + rx_count = reg_decode(reg, MAX_PROD_PIPES, val); + rx_base = reg_decode(reg, PROD_LOWEST, val); limit = rx_base + rx_count; if (limit > IPA_ENDPOINT_MAX) { diff --git a/drivers/net/ipa/ipa_interrupt.c b/drivers/net/ipa/ipa_interrupt.c index 9a1153e80a3a..4bc05948f772 100644 --- a/drivers/net/ipa/ipa_interrupt.c +++ b/drivers/net/ipa/ipa_interrupt.c @@ -47,12 +47,12 @@ struct ipa_interrupt { static void ipa_interrupt_process(struct ipa_interrupt *interrupt, u32 irq_id) { struct ipa *ipa = interrupt->ipa; - const struct ipa_reg *reg; + const struct reg *reg; u32 mask = BIT(irq_id); u32 offset; reg = ipa_reg(ipa, IPA_IRQ_CLR); - offset = ipa_reg_offset(reg); + offset = reg_offset(reg); switch (irq_id) { case IPA_IRQ_UC_0: @@ -85,7 +85,7 @@ static irqreturn_t ipa_isr_thread(int irq, void *dev_id) struct ipa_interrupt *interrupt = dev_id; struct ipa *ipa = interrupt->ipa; u32 enabled = interrupt->enabled; - const struct ipa_reg *reg; + const struct reg *reg; struct device *dev; u32 pending; u32 offset; @@ -102,7 +102,7 @@ static irqreturn_t ipa_isr_thread(int irq, void *dev_id) * only the enabled ones. 
*/ reg = ipa_reg(ipa, IPA_IRQ_STTS); - offset = ipa_reg_offset(reg); + offset = reg_offset(reg); pending = ioread32(ipa->reg_virt + offset); while ((mask = pending & enabled)) { do { @@ -120,8 +120,7 @@ static irqreturn_t ipa_isr_thread(int irq, void *dev_id) dev_dbg(dev, "clearing disabled IPA interrupts 0x%08x\n", pending); reg = ipa_reg(ipa, IPA_IRQ_CLR); - offset = ipa_reg_offset(reg); - iowrite32(pending, ipa->reg_virt + offset); + iowrite32(pending, ipa->reg_virt + reg_offset(reg)); } out_power_put: pm_runtime_mark_last_busy(dev); @@ -132,9 +131,9 @@ out_power_put: static void ipa_interrupt_enabled_update(struct ipa *ipa) { - const struct ipa_reg *reg = ipa_reg(ipa, IPA_IRQ_EN); + const struct reg *reg = ipa_reg(ipa, IPA_IRQ_EN); - iowrite32(ipa->interrupt->enabled, ipa->reg_virt + ipa_reg_offset(reg)); + iowrite32(ipa->interrupt->enabled, ipa->reg_virt + reg_offset(reg)); } /* Enable an IPA interrupt type */ @@ -170,7 +169,7 @@ static void ipa_interrupt_suspend_control(struct ipa_interrupt *interrupt, struct ipa *ipa = interrupt->ipa; u32 mask = BIT(endpoint_id % 32); u32 unit = endpoint_id / 32; - const struct ipa_reg *reg; + const struct reg *reg; u32 offset; u32 val; @@ -181,7 +180,7 @@ static void ipa_interrupt_suspend_control(struct ipa_interrupt *interrupt, return; reg = ipa_reg(ipa, IRQ_SUSPEND_EN); - offset = ipa_reg_n_offset(reg, unit); + offset = reg_n_offset(reg, unit); val = ioread32(ipa->reg_virt + offset); if (enable) @@ -215,18 +214,18 @@ void ipa_interrupt_suspend_clear_all(struct ipa_interrupt *interrupt) unit_count = roundup(ipa->endpoint_count, 32); for (unit = 0; unit < unit_count; unit++) { - const struct ipa_reg *reg; + const struct reg *reg; u32 val; reg = ipa_reg(ipa, IRQ_SUSPEND_INFO); - val = ioread32(ipa->reg_virt + ipa_reg_n_offset(reg, unit)); + val = ioread32(ipa->reg_virt + reg_n_offset(reg, unit)); /* SUSPEND interrupt status isn't cleared on IPA version 3.0 */ if (ipa->version == IPA_VERSION_3_0) continue; reg = ipa_reg(ipa, IRQ_SUSPEND_CLR); - iowrite32(val, ipa->reg_virt + ipa_reg_n_offset(reg, unit)); + iowrite32(val, ipa->reg_virt + reg_n_offset(reg, unit)); } } @@ -241,7 +240,7 @@ struct ipa_interrupt *ipa_interrupt_config(struct ipa *ipa) { struct device *dev = &ipa->pdev->dev; struct ipa_interrupt *interrupt; - const struct ipa_reg *reg; + const struct reg *reg; unsigned int irq; int ret; @@ -261,7 +260,7 @@ struct ipa_interrupt *ipa_interrupt_config(struct ipa *ipa) /* Start with all IPA interrupts disabled */ reg = ipa_reg(ipa, IPA_IRQ_EN); - iowrite32(0, ipa->reg_virt + ipa_reg_offset(reg)); + iowrite32(0, ipa->reg_virt + reg_offset(reg)); ret = request_threaded_irq(irq, NULL, ipa_isr_thread, IRQF_ONESHOT, "ipa", interrupt); diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c index 60d7c558163f..6cb7bf96a626 100644 --- a/drivers/net/ipa/ipa_main.c +++ b/drivers/net/ipa/ipa_main.c @@ -203,7 +203,7 @@ static void ipa_teardown(struct ipa *ipa) static void ipa_hardware_config_bcr(struct ipa *ipa, const struct ipa_data *data) { - const struct ipa_reg *reg; + const struct reg *reg; u32 val; /* IPA v4.5+ has no backward compatibility register */ @@ -212,13 +212,13 @@ ipa_hardware_config_bcr(struct ipa *ipa, const struct ipa_data *data) reg = ipa_reg(ipa, IPA_BCR); val = data->backward_compat; - iowrite32(val, ipa->reg_virt + ipa_reg_offset(reg)); + iowrite32(val, ipa->reg_virt + reg_offset(reg)); } static void ipa_hardware_config_tx(struct ipa *ipa) { enum ipa_version version = ipa->version; - const struct ipa_reg *reg; + const 
struct reg *reg; u32 offset; u32 val; @@ -227,11 +227,11 @@ static void ipa_hardware_config_tx(struct ipa *ipa) /* Disable PA mask to allow HOLB drop */ reg = ipa_reg(ipa, IPA_TX_CFG); - offset = ipa_reg_offset(reg); + offset = reg_offset(reg); val = ioread32(ipa->reg_virt + offset); - val &= ~ipa_reg_bit(reg, PA_MASK_EN); + val &= ~reg_bit(reg, PA_MASK_EN); iowrite32(val, ipa->reg_virt + offset); } @@ -239,7 +239,7 @@ static void ipa_hardware_config_tx(struct ipa *ipa) static void ipa_hardware_config_clkon(struct ipa *ipa) { enum ipa_version version = ipa->version; - const struct ipa_reg *reg; + const struct reg *reg; u32 val; if (version >= IPA_VERSION_4_5) @@ -252,20 +252,20 @@ static void ipa_hardware_config_clkon(struct ipa *ipa) reg = ipa_reg(ipa, CLKON_CFG); if (version == IPA_VERSION_3_1) { /* Disable MISC clock gating */ - val = ipa_reg_bit(reg, CLKON_MISC); + val = reg_bit(reg, CLKON_MISC); } else { /* IPA v4.0+ */ /* Enable open global clocks in the CLKON configuration */ - val = ipa_reg_bit(reg, CLKON_GLOBAL); - val |= ipa_reg_bit(reg, GLOBAL_2X_CLK); + val = reg_bit(reg, CLKON_GLOBAL); + val |= reg_bit(reg, GLOBAL_2X_CLK); } - iowrite32(val, ipa->reg_virt + ipa_reg_offset(reg)); + iowrite32(val, ipa->reg_virt + reg_offset(reg)); } /* Configure bus access behavior for IPA components */ static void ipa_hardware_config_comp(struct ipa *ipa) { - const struct ipa_reg *reg; + const struct reg *reg; u32 offset; u32 val; @@ -274,21 +274,22 @@ static void ipa_hardware_config_comp(struct ipa *ipa) return; reg = ipa_reg(ipa, COMP_CFG); - offset = ipa_reg_offset(reg); + offset = reg_offset(reg); + val = ioread32(ipa->reg_virt + offset); if (ipa->version == IPA_VERSION_4_0) { - val &= ~ipa_reg_bit(reg, IPA_QMB_SELECT_CONS_EN); - val &= ~ipa_reg_bit(reg, IPA_QMB_SELECT_PROD_EN); - val &= ~ipa_reg_bit(reg, IPA_QMB_SELECT_GLOBAL_EN); + val &= ~reg_bit(reg, IPA_QMB_SELECT_CONS_EN); + val &= ~reg_bit(reg, IPA_QMB_SELECT_PROD_EN); + val &= ~reg_bit(reg, IPA_QMB_SELECT_GLOBAL_EN); } else if (ipa->version < IPA_VERSION_4_5) { - val |= ipa_reg_bit(reg, GSI_MULTI_AXI_MASTERS_DIS); + val |= reg_bit(reg, GSI_MULTI_AXI_MASTERS_DIS); } else { /* For IPA v4.5 FULL_FLUSH_WAIT_RS_CLOSURE_EN is 0 */ } - val |= ipa_reg_bit(reg, GSI_MULTI_INORDER_RD_DIS); - val |= ipa_reg_bit(reg, GSI_MULTI_INORDER_WR_DIS); + val |= reg_bit(reg, GSI_MULTI_INORDER_RD_DIS); + val |= reg_bit(reg, GSI_MULTI_INORDER_WR_DIS); iowrite32(val, ipa->reg_virt + offset); } @@ -299,7 +300,7 @@ ipa_hardware_config_qsb(struct ipa *ipa, const struct ipa_data *data) { const struct ipa_qsb_data *data0; const struct ipa_qsb_data *data1; - const struct ipa_reg *reg; + const struct reg *reg; u32 val; /* QMB 0 represents DDR; QMB 1 (if present) represents PCIe */ @@ -310,29 +311,27 @@ ipa_hardware_config_qsb(struct ipa *ipa, const struct ipa_data *data) /* Max outstanding write accesses for QSB masters */ reg = ipa_reg(ipa, QSB_MAX_WRITES); - val = ipa_reg_encode(reg, GEN_QMB_0_MAX_WRITES, data0->max_writes); + val = reg_encode(reg, GEN_QMB_0_MAX_WRITES, data0->max_writes); if (data->qsb_count > 1) - val |= ipa_reg_encode(reg, GEN_QMB_1_MAX_WRITES, - data1->max_writes); + val |= reg_encode(reg, GEN_QMB_1_MAX_WRITES, data1->max_writes); - iowrite32(val, ipa->reg_virt + ipa_reg_offset(reg)); + iowrite32(val, ipa->reg_virt + reg_offset(reg)); /* Max outstanding read accesses for QSB masters */ reg = ipa_reg(ipa, QSB_MAX_READS); - val = ipa_reg_encode(reg, GEN_QMB_0_MAX_READS, data0->max_reads); + val = reg_encode(reg, GEN_QMB_0_MAX_READS, 
data0->max_reads); if (ipa->version >= IPA_VERSION_4_0) - val |= ipa_reg_encode(reg, GEN_QMB_0_MAX_READS_BEATS, - data0->max_reads_beats); + val |= reg_encode(reg, GEN_QMB_0_MAX_READS_BEATS, + data0->max_reads_beats); if (data->qsb_count > 1) { - val = ipa_reg_encode(reg, GEN_QMB_1_MAX_READS, - data1->max_reads); + val = reg_encode(reg, GEN_QMB_1_MAX_READS, data1->max_reads); if (ipa->version >= IPA_VERSION_4_0) - val |= ipa_reg_encode(reg, GEN_QMB_1_MAX_READS_BEATS, - data1->max_reads_beats); + val |= reg_encode(reg, GEN_QMB_1_MAX_READS_BEATS, + data1->max_reads_beats); } - iowrite32(val, ipa->reg_virt + ipa_reg_offset(reg)); + iowrite32(val, ipa->reg_virt + reg_offset(reg)); } /* The internal inactivity timer clock is used for the aggregation timer */ @@ -368,46 +367,47 @@ static __always_inline u32 ipa_aggr_granularity_val(u32 usec) */ static void ipa_qtime_config(struct ipa *ipa) { - const struct ipa_reg *reg; + const struct reg *reg; u32 offset; u32 val; /* Timer clock divider must be disabled when we change the rate */ reg = ipa_reg(ipa, TIMERS_XO_CLK_DIV_CFG); - iowrite32(0, ipa->reg_virt + ipa_reg_offset(reg)); + iowrite32(0, ipa->reg_virt + reg_offset(reg)); reg = ipa_reg(ipa, QTIME_TIMESTAMP_CFG); /* Set DPL time stamp resolution to use Qtime (instead of 1 msec) */ - val = ipa_reg_encode(reg, DPL_TIMESTAMP_LSB, DPL_TIMESTAMP_SHIFT); - val |= ipa_reg_bit(reg, DPL_TIMESTAMP_SEL); + val = reg_encode(reg, DPL_TIMESTAMP_LSB, DPL_TIMESTAMP_SHIFT); + val |= reg_bit(reg, DPL_TIMESTAMP_SEL); /* Configure tag and NAT Qtime timestamp resolution as well */ - val = ipa_reg_encode(reg, TAG_TIMESTAMP_LSB, TAG_TIMESTAMP_SHIFT); - val = ipa_reg_encode(reg, NAT_TIMESTAMP_LSB, NAT_TIMESTAMP_SHIFT); + val = reg_encode(reg, TAG_TIMESTAMP_LSB, TAG_TIMESTAMP_SHIFT); + val = reg_encode(reg, NAT_TIMESTAMP_LSB, NAT_TIMESTAMP_SHIFT); - iowrite32(val, ipa->reg_virt + ipa_reg_offset(reg)); + iowrite32(val, ipa->reg_virt + reg_offset(reg)); /* Set granularity of pulse generators used for other timers */ reg = ipa_reg(ipa, TIMERS_PULSE_GRAN_CFG); - val = ipa_reg_encode(reg, PULSE_GRAN_0, IPA_GRAN_100_US); - val |= ipa_reg_encode(reg, PULSE_GRAN_1, IPA_GRAN_1_MS); + val = reg_encode(reg, PULSE_GRAN_0, IPA_GRAN_100_US); + val |= reg_encode(reg, PULSE_GRAN_1, IPA_GRAN_1_MS); if (ipa->version >= IPA_VERSION_5_0) { - val |= ipa_reg_encode(reg, PULSE_GRAN_2, IPA_GRAN_10_MS); - val |= ipa_reg_encode(reg, PULSE_GRAN_3, IPA_GRAN_10_MS); + val |= reg_encode(reg, PULSE_GRAN_2, IPA_GRAN_10_MS); + val |= reg_encode(reg, PULSE_GRAN_3, IPA_GRAN_10_MS); } else { - val |= ipa_reg_encode(reg, PULSE_GRAN_2, IPA_GRAN_1_MS); + val |= reg_encode(reg, PULSE_GRAN_2, IPA_GRAN_1_MS); } - iowrite32(val, ipa->reg_virt + ipa_reg_offset(reg)); + iowrite32(val, ipa->reg_virt + reg_offset(reg)); /* Actual divider is 1 more than value supplied here */ reg = ipa_reg(ipa, TIMERS_XO_CLK_DIV_CFG); - offset = ipa_reg_offset(reg); - val = ipa_reg_encode(reg, DIV_VALUE, IPA_XO_CLOCK_DIVIDER - 1); + offset = reg_offset(reg); + + val = reg_encode(reg, DIV_VALUE, IPA_XO_CLOCK_DIVIDER - 1); iowrite32(val, ipa->reg_virt + offset); /* Divider value is set; re-enable the common timer clock divider */ - val |= ipa_reg_bit(reg, DIV_ENABLE); + val |= reg_bit(reg, DIV_ENABLE); iowrite32(val, ipa->reg_virt + offset); } @@ -416,13 +416,13 @@ static void ipa_qtime_config(struct ipa *ipa) static void ipa_hardware_config_counter(struct ipa *ipa) { u32 granularity = ipa_aggr_granularity_val(IPA_AGGR_GRANULARITY); - const struct ipa_reg *reg; + const struct reg *reg; 
u32 val; reg = ipa_reg(ipa, COUNTER_CFG); /* If defined, EOT_COAL_GRANULARITY is 0 */ - val = ipa_reg_encode(reg, AGGR_GRANULARITY, granularity); - iowrite32(val, ipa->reg_virt + ipa_reg_offset(reg)); + val = reg_encode(reg, AGGR_GRANULARITY, granularity); + iowrite32(val, ipa->reg_virt + reg_offset(reg)); } static void ipa_hardware_config_timing(struct ipa *ipa) @@ -435,7 +435,7 @@ static void ipa_hardware_config_timing(struct ipa *ipa) static void ipa_hardware_config_hashing(struct ipa *ipa) { - const struct ipa_reg *reg; + const struct reg *reg; /* Other than IPA v4.2, all versions enable "hashing". Starting * with IPA v5.0, the filter and router tables are implemented @@ -451,26 +451,26 @@ static void ipa_hardware_config_hashing(struct ipa *ipa) /* IPV6_ROUTER_HASH, IPV6_FILTER_HASH, IPV4_ROUTER_HASH, * IPV4_FILTER_HASH are all zero. */ - iowrite32(0, ipa->reg_virt + ipa_reg_offset(reg)); + iowrite32(0, ipa->reg_virt + reg_offset(reg)); } static void ipa_idle_indication_cfg(struct ipa *ipa, u32 enter_idle_debounce_thresh, bool const_non_idle_enable) { - const struct ipa_reg *reg; + const struct reg *reg; u32 val; if (ipa->version < IPA_VERSION_3_5_1) return; reg = ipa_reg(ipa, IDLE_INDICATION_CFG); - val = ipa_reg_encode(reg, ENTER_IDLE_DEBOUNCE_THRESH, - enter_idle_debounce_thresh); + val = reg_encode(reg, ENTER_IDLE_DEBOUNCE_THRESH, + enter_idle_debounce_thresh); if (const_non_idle_enable) - val |= ipa_reg_bit(reg, CONST_NON_IDLE_ENABLE); + val |= reg_bit(reg, CONST_NON_IDLE_ENABLE); - iowrite32(val, ipa->reg_virt + ipa_reg_offset(reg)); + iowrite32(val, ipa->reg_virt + reg_offset(reg)); } /** diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c index a07776e20cb0..85096d1efe5b 100644 --- a/drivers/net/ipa/ipa_mem.c +++ b/drivers/net/ipa/ipa_mem.c @@ -75,7 +75,7 @@ ipa_mem_zero_region_add(struct gsi_trans *trans, enum ipa_mem_id mem_id) int ipa_mem_setup(struct ipa *ipa) { dma_addr_t addr = ipa->zero_addr; - const struct ipa_reg *reg; + const struct reg *reg; const struct ipa_mem *mem; struct gsi_trans *trans; u32 offset; @@ -115,8 +115,8 @@ int ipa_mem_setup(struct ipa *ipa) offset = ipa->mem_offset + mem->offset; reg = ipa_reg(ipa, LOCAL_PKT_PROC_CNTXT); - val = ipa_reg_encode(reg, IPA_BASE_ADDR, offset); - iowrite32(val, ipa->reg_virt + ipa_reg_offset(reg)); + val = reg_encode(reg, IPA_BASE_ADDR, offset); + iowrite32(val, ipa->reg_virt + reg_offset(reg)); return 0; } @@ -318,8 +318,8 @@ static bool ipa_mem_size_valid(struct ipa *ipa) int ipa_mem_config(struct ipa *ipa) { struct device *dev = &ipa->pdev->dev; - const struct ipa_reg *reg; const struct ipa_mem *mem; + const struct reg *reg; dma_addr_t addr; u32 mem_size; void *virt; @@ -328,13 +328,13 @@ int ipa_mem_config(struct ipa *ipa) /* Check the advertised location and size of the shared memory area */ reg = ipa_reg(ipa, SHARED_MEM_SIZE); - val = ioread32(ipa->reg_virt + ipa_reg_offset(reg)); + val = ioread32(ipa->reg_virt + reg_offset(reg)); /* The fields in the register are in 8 byte units */ - ipa->mem_offset = 8 * ipa_reg_decode(reg, MEM_BADDR, val); + ipa->mem_offset = 8 * reg_decode(reg, MEM_BADDR, val); /* Make sure the end is within the region's mapped space */ - mem_size = 8 * ipa_reg_decode(reg, MEM_SIZE, val); + mem_size = 8 * reg_decode(reg, MEM_SIZE, val); /* If the sizes don't match, issue a warning */ if (ipa->mem_offset + mem_size < ipa->mem_size) { diff --git a/drivers/net/ipa/ipa_reg.c b/drivers/net/ipa/ipa_reg.c index ddd529153e15..735fa6591609 100644 --- a/drivers/net/ipa/ipa_reg.c +++ 
b/drivers/net/ipa/ipa_reg.c @@ -9,73 +9,96 @@ #include "ipa.h" #include "ipa_reg.h" -/* Is this register valid and defined for the current IPA version? */ -static bool ipa_reg_valid(struct ipa *ipa, enum ipa_reg_id reg_id) +/* Is this register ID valid for the current IPA version? */ +static bool ipa_reg_id_valid(struct ipa *ipa, enum ipa_reg_id reg_id) { enum ipa_version version = ipa->version; - bool valid; - - /* Check for bogus (out of range) register IDs */ - if ((u32)reg_id >= ipa->regs->reg_count) - return false; switch (reg_id) { case IPA_BCR: case COUNTER_CFG: - valid = version < IPA_VERSION_4_5; - break; + return version < IPA_VERSION_4_5; case IPA_TX_CFG: case FLAVOR_0: case IDLE_INDICATION_CFG: - valid = version >= IPA_VERSION_3_5; - break; + return version >= IPA_VERSION_3_5; case QTIME_TIMESTAMP_CFG: case TIMERS_XO_CLK_DIV_CFG: case TIMERS_PULSE_GRAN_CFG: - valid = version >= IPA_VERSION_4_5; - break; + return version >= IPA_VERSION_4_5; case SRC_RSRC_GRP_45_RSRC_TYPE: case DST_RSRC_GRP_45_RSRC_TYPE: - valid = version <= IPA_VERSION_3_1 || - version == IPA_VERSION_4_5; - break; + return version <= IPA_VERSION_3_1 || + version == IPA_VERSION_4_5; case SRC_RSRC_GRP_67_RSRC_TYPE: case DST_RSRC_GRP_67_RSRC_TYPE: - valid = version <= IPA_VERSION_3_1; - break; + return version <= IPA_VERSION_3_1; case ENDP_FILTER_ROUTER_HSH_CFG: - valid = version != IPA_VERSION_4_2; - break; + return version != IPA_VERSION_4_2; case IRQ_SUSPEND_EN: case IRQ_SUSPEND_CLR: - valid = version >= IPA_VERSION_3_1; - break; + return version >= IPA_VERSION_3_1; + + case COMP_CFG: + case CLKON_CFG: + case ROUTE: + case SHARED_MEM_SIZE: + case QSB_MAX_WRITES: + case QSB_MAX_READS: + case FILT_ROUT_HASH_EN: + case FILT_ROUT_CACHE_CFG: + case FILT_ROUT_HASH_FLUSH: + case FILT_ROUT_CACHE_FLUSH: + case STATE_AGGR_ACTIVE: + case LOCAL_PKT_PROC_CNTXT: + case AGGR_FORCE_CLOSE: + case SRC_RSRC_GRP_01_RSRC_TYPE: + case SRC_RSRC_GRP_23_RSRC_TYPE: + case DST_RSRC_GRP_01_RSRC_TYPE: + case DST_RSRC_GRP_23_RSRC_TYPE: + case ENDP_INIT_CTRL: + case ENDP_INIT_CFG: + case ENDP_INIT_NAT: + case ENDP_INIT_HDR: + case ENDP_INIT_HDR_EXT: + case ENDP_INIT_HDR_METADATA_MASK: + case ENDP_INIT_MODE: + case ENDP_INIT_AGGR: + case ENDP_INIT_HOL_BLOCK_EN: + case ENDP_INIT_HOL_BLOCK_TIMER: + case ENDP_INIT_DEAGGR: + case ENDP_INIT_RSRC_GRP: + case ENDP_INIT_SEQ: + case ENDP_STATUS: + case ENDP_FILTER_CACHE_CFG: + case ENDP_ROUTER_CACHE_CFG: + case IPA_IRQ_STTS: + case IPA_IRQ_EN: + case IPA_IRQ_CLR: + case IPA_IRQ_UC: + case IRQ_SUSPEND_INFO: + return true; /* These should be defined for all versions */ default: - valid = true; /* Others should be defined for all versions */ - break; + return false; } - - /* To be valid, it must be defined */ - - return valid && ipa->regs->reg[reg_id]; } -const struct ipa_reg *ipa_reg(struct ipa *ipa, enum ipa_reg_id reg_id) +const struct reg *ipa_reg(struct ipa *ipa, enum ipa_reg_id reg_id) { - if (WARN_ON(!ipa_reg_valid(ipa, reg_id))) + if (WARN(!ipa_reg_id_valid(ipa, reg_id), "invalid reg %u\n", reg_id)) return NULL; - return ipa->regs->reg[reg_id]; + return reg(ipa->regs, reg_id); } -static const struct ipa_regs *ipa_regs(enum ipa_version version) +static const struct regs *ipa_regs(enum ipa_version version) { switch (version) { case IPA_VERSION_3_1: @@ -100,7 +123,7 @@ static const struct ipa_regs *ipa_regs(enum ipa_version version) int ipa_reg_init(struct ipa *ipa) { struct device *dev = &ipa->pdev->dev; - const struct ipa_regs *regs; + const struct regs *regs; struct resource *res; regs = 
ipa_regs(ipa->version); @@ -123,7 +146,6 @@ int ipa_reg_init(struct ipa *ipa) dev_err(dev, "unable to remap \"ipa-reg\" memory\n"); return -ENOMEM; } - ipa->reg_addr = res->start; ipa->regs = regs; return 0; diff --git a/drivers/net/ipa/ipa_reg.h b/drivers/net/ipa/ipa_reg.h index 82d43eca170e..28aa1351dd48 100644 --- a/drivers/net/ipa/ipa_reg.h +++ b/drivers/net/ipa/ipa_reg.h @@ -10,6 +10,7 @@ #include <linux/bug.h> #include "ipa_version.h" +#include "reg.h" struct ipa; @@ -35,7 +36,7 @@ struct ipa; * by register ID. Each entry in the array specifies the base offset and * (for parameterized registers) a non-zero stride value. Not all versions * of IPA define all registers. The offset for a register is returned by - * ipa_reg_offset() when the register's ipa_reg structure is supplied; + * reg_offset() when the register's ipa_reg structure is supplied; * zero is returned for an undefined register (this should never happen). * * Some registers encode multiple fields within them. Each field in @@ -44,9 +45,9 @@ struct ipa; * an array of field masks, indexed by field ID. Two functions are * used to access register fields; both take an ipa_reg structure as * argument. To encode a value to be represented in a register field, - * the value and field ID are passed to ipa_reg_encode(). To extract + * the value and field ID are passed to reg_encode(). To extract * a value encoded in a register field, the field ID is passed to - * ipa_reg_decode(). In addition, for single-bit fields, ipa_reg_bit() + * reg_decode(). In addition, for single-bit fields, reg_bit() * can be used to either encode the bit value, or to generate a mask * used to extract the bit value. */ @@ -110,56 +111,6 @@ enum ipa_reg_id { IPA_REG_ID_COUNT, /* Last; not an ID */ }; -/** - * struct ipa_reg - An IPA register descriptor - * @offset: Register offset relative to base of the "ipa-reg" memory - * @stride: Distance between two instances, if parameterized - * @fcount: Number of entries in the @fmask array - * @fmask: Array of mask values defining position and width of fields - * @name: Upper-case name of the IPA register - */ -struct ipa_reg { - u32 offset; - u32 stride; - u32 fcount; - const u32 *fmask; /* BIT(nr) or GENMASK(h, l) */ - const char *name; -}; - -/* Helper macro for defining "simple" (non-parameterized) registers */ -#define IPA_REG(__NAME, __reg_id, __offset) \ - IPA_REG_STRIDE(__NAME, __reg_id, __offset, 0) - -/* Helper macro for defining parameterized registers, specifying stride */ -#define IPA_REG_STRIDE(__NAME, __reg_id, __offset, __stride) \ - static const struct ipa_reg ipa_reg_ ## __reg_id = { \ - .name = #__NAME, \ - .offset = __offset, \ - .stride = __stride, \ - } - -#define IPA_REG_FIELDS(__NAME, __name, __offset) \ - IPA_REG_STRIDE_FIELDS(__NAME, __name, __offset, 0) - -#define IPA_REG_STRIDE_FIELDS(__NAME, __name, __offset, __stride) \ - static const struct ipa_reg ipa_reg_ ## __name = { \ - .name = #__NAME, \ - .offset = __offset, \ - .stride = __stride, \ - .fcount = ARRAY_SIZE(ipa_reg_ ## __name ## _fmask), \ - .fmask = ipa_reg_ ## __name ## _fmask, \ - } - -/** - * struct ipa_regs - Description of registers supported by hardware - * @reg_count: Number of registers in the @reg[] array - * @reg: Array of register descriptors - */ -struct ipa_regs { - u32 reg_count; - const struct ipa_reg **reg; -}; - /* COMP_CFG register */ enum ipa_reg_comp_cfg_field_id { COMP_CFG_ENABLE, /* Not IPA v4.0+ */ @@ -687,79 +638,15 @@ enum ipa_reg_ipa_irq_uc_field_id { UC_INTR, }; -extern const struct ipa_regs 
ipa_regs_v3_1; -extern const struct ipa_regs ipa_regs_v3_5_1; -extern const struct ipa_regs ipa_regs_v4_2; -extern const struct ipa_regs ipa_regs_v4_5; -extern const struct ipa_regs ipa_regs_v4_7; -extern const struct ipa_regs ipa_regs_v4_9; -extern const struct ipa_regs ipa_regs_v4_11; - -/* Return the field mask for a field in a register */ -static inline u32 ipa_reg_fmask(const struct ipa_reg *reg, u32 field_id) -{ - if (!reg || WARN_ON(field_id >= reg->fcount)) - return 0; - - return reg->fmask[field_id]; -} - -/* Return the mask for a single-bit field in a register */ -static inline u32 ipa_reg_bit(const struct ipa_reg *reg, u32 field_id) -{ - u32 fmask = ipa_reg_fmask(reg, field_id); - - WARN_ON(!is_power_of_2(fmask)); - - return fmask; -} - -/* Encode a value into the given field of a register */ -static inline u32 -ipa_reg_encode(const struct ipa_reg *reg, u32 field_id, u32 val) -{ - u32 fmask = ipa_reg_fmask(reg, field_id); - - if (!fmask) - return 0; - - val <<= __ffs(fmask); - if (WARN_ON(val & ~fmask)) - return 0; - - return val; -} - -/* Given a register value, decode (extract) the value in the given field */ -static inline u32 -ipa_reg_decode(const struct ipa_reg *reg, u32 field_id, u32 val) -{ - u32 fmask = ipa_reg_fmask(reg, field_id); - - return fmask ? (val & fmask) >> __ffs(fmask) : 0; -} - -/* Return the maximum value representable by the given field; always 2^n - 1 */ -static inline u32 ipa_reg_field_max(const struct ipa_reg *reg, u32 field_id) -{ - u32 fmask = ipa_reg_fmask(reg, field_id); - - return fmask ? fmask >> __ffs(fmask) : 0; -} - -const struct ipa_reg *ipa_reg(struct ipa *ipa, enum ipa_reg_id reg_id); - -/* Returns 0 for NULL reg; warning will have already been issued */ -static inline u32 ipa_reg_offset(const struct ipa_reg *reg) -{ - return reg ? reg->offset : 0; -} +extern const struct regs ipa_regs_v3_1; +extern const struct regs ipa_regs_v3_5_1; +extern const struct regs ipa_regs_v4_2; +extern const struct regs ipa_regs_v4_5; +extern const struct regs ipa_regs_v4_7; +extern const struct regs ipa_regs_v4_9; +extern const struct regs ipa_regs_v4_11; -/* Returns 0 for NULL reg; warning will have already been issued */ -static inline u32 ipa_reg_n_offset(const struct ipa_reg *reg, u32 n) -{ - return reg ? 
reg->offset + n * reg->stride : 0; -} +const struct reg *ipa_reg(struct ipa *ipa, enum ipa_reg_id reg_id); int ipa_reg_init(struct ipa *ipa); void ipa_reg_exit(struct ipa *ipa); diff --git a/drivers/net/ipa/ipa_resource.c b/drivers/net/ipa/ipa_resource.c index a257f0e5e361..82c88a744d10 100644 --- a/drivers/net/ipa/ipa_resource.c +++ b/drivers/net/ipa/ipa_resource.c @@ -70,20 +70,20 @@ static bool ipa_resource_limits_valid(struct ipa *ipa, static void ipa_resource_config_common(struct ipa *ipa, u32 resource_type, - const struct ipa_reg *reg, + const struct reg *reg, const struct ipa_resource_limits *xlimits, const struct ipa_resource_limits *ylimits) { u32 val; - val = ipa_reg_encode(reg, X_MIN_LIM, xlimits->min); - val |= ipa_reg_encode(reg, X_MAX_LIM, xlimits->max); + val = reg_encode(reg, X_MIN_LIM, xlimits->min); + val |= reg_encode(reg, X_MAX_LIM, xlimits->max); if (ylimits) { - val |= ipa_reg_encode(reg, Y_MIN_LIM, ylimits->min); - val |= ipa_reg_encode(reg, Y_MAX_LIM, ylimits->max); + val |= reg_encode(reg, Y_MIN_LIM, ylimits->min); + val |= reg_encode(reg, Y_MAX_LIM, ylimits->max); } - iowrite32(val, ipa->reg_virt + ipa_reg_n_offset(reg, resource_type)); + iowrite32(val, ipa->reg_virt + reg_n_offset(reg, resource_type)); } static void ipa_resource_config_src(struct ipa *ipa, u32 resource_type, @@ -92,7 +92,7 @@ static void ipa_resource_config_src(struct ipa *ipa, u32 resource_type, u32 group_count = data->rsrc_group_src_count; const struct ipa_resource_limits *ylimits; const struct ipa_resource *resource; - const struct ipa_reg *reg; + const struct reg *reg; resource = &data->resource_src[resource_type]; @@ -129,7 +129,7 @@ static void ipa_resource_config_dst(struct ipa *ipa, u32 resource_type, u32 group_count = data->rsrc_group_dst_count; const struct ipa_resource_limits *ylimits; const struct ipa_resource *resource; - const struct ipa_reg *reg; + const struct reg *reg; resource = &data->resource_dst[resource_type]; diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c index b9d505191f88..f0529c31d0b6 100644 --- a/drivers/net/ipa/ipa_table.c +++ b/drivers/net/ipa/ipa_table.c @@ -345,9 +345,8 @@ void ipa_table_reset(struct ipa *ipa, bool modem) int ipa_table_hash_flush(struct ipa *ipa) { - const struct ipa_reg *reg; struct gsi_trans *trans; - u32 offset; + const struct reg *reg; u32 val; if (!ipa_table_hash_support(ipa)) @@ -361,22 +360,20 @@ int ipa_table_hash_flush(struct ipa *ipa) if (ipa->version < IPA_VERSION_5_0) { reg = ipa_reg(ipa, FILT_ROUT_HASH_FLUSH); - offset = ipa_reg_offset(reg); - val = ipa_reg_bit(reg, IPV6_ROUTER_HASH); - val |= ipa_reg_bit(reg, IPV6_FILTER_HASH); - val |= ipa_reg_bit(reg, IPV4_ROUTER_HASH); - val |= ipa_reg_bit(reg, IPV4_FILTER_HASH); + val = reg_bit(reg, IPV6_ROUTER_HASH); + val |= reg_bit(reg, IPV6_FILTER_HASH); + val |= reg_bit(reg, IPV4_ROUTER_HASH); + val |= reg_bit(reg, IPV4_FILTER_HASH); } else { reg = ipa_reg(ipa, FILT_ROUT_CACHE_FLUSH); - offset = ipa_reg_offset(reg); /* IPA v5.0+ uses a unified cache (both IPv4 and IPv6) */ - val = ipa_reg_bit(reg, ROUTER_CACHE); - val |= ipa_reg_bit(reg, FILTER_CACHE); + val = reg_bit(reg, ROUTER_CACHE); + val |= reg_bit(reg, FILTER_CACHE); } - ipa_cmd_register_write_add(trans, offset, val, val, false); + ipa_cmd_register_write_add(trans, reg_offset(reg), val, val, false); gsi_trans_commit_wait(trans); @@ -495,22 +492,22 @@ static void ipa_filter_tuple_zero(struct ipa_endpoint *endpoint) { u32 endpoint_id = endpoint->endpoint_id; struct ipa *ipa = endpoint->ipa; - const struct ipa_reg 
*reg; + const struct reg *reg; u32 offset; u32 val; if (ipa->version < IPA_VERSION_5_0) { reg = ipa_reg(ipa, ENDP_FILTER_ROUTER_HSH_CFG); - offset = ipa_reg_n_offset(reg, endpoint_id); + offset = reg_n_offset(reg, endpoint_id); val = ioread32(endpoint->ipa->reg_virt + offset); /* Zero all filter-related fields, preserving the rest */ - val &= ~ipa_reg_fmask(reg, FILTER_HASH_MSK_ALL); + val &= ~reg_fmask(reg, FILTER_HASH_MSK_ALL); } else { /* IPA v5.0 separates filter and router cache configuration */ reg = ipa_reg(ipa, ENDP_FILTER_CACHE_CFG); - offset = ipa_reg_n_offset(reg, endpoint_id); + offset = reg_n_offset(reg, endpoint_id); /* Zero all filter-related fields */ val = 0; @@ -554,22 +551,22 @@ static bool ipa_route_id_modem(struct ipa *ipa, u32 route_id) */ static void ipa_route_tuple_zero(struct ipa *ipa, u32 route_id) { - const struct ipa_reg *reg; + const struct reg *reg; u32 offset; u32 val; if (ipa->version < IPA_VERSION_5_0) { reg = ipa_reg(ipa, ENDP_FILTER_ROUTER_HSH_CFG); - offset = ipa_reg_n_offset(reg, route_id); + offset = reg_n_offset(reg, route_id); val = ioread32(ipa->reg_virt + offset); /* Zero all route-related fields, preserving the rest */ - val &= ~ipa_reg_fmask(reg, ROUTER_HASH_MSK_ALL); + val &= ~reg_fmask(reg, ROUTER_HASH_MSK_ALL); } else { /* IPA v5.0 separates filter and router cache configuration */ reg = ipa_reg(ipa, ENDP_ROUTER_CACHE_CFG); - offset = ipa_reg_n_offset(reg, route_id); + offset = reg_n_offset(reg, route_id); /* Zero all route-related fields */ val = 0; diff --git a/drivers/net/ipa/ipa_uc.c b/drivers/net/ipa/ipa_uc.c index cb8a76a75f21..7eaa0b4ebed9 100644 --- a/drivers/net/ipa/ipa_uc.c +++ b/drivers/net/ipa/ipa_uc.c @@ -231,7 +231,7 @@ void ipa_uc_power(struct ipa *ipa) static void send_uc_command(struct ipa *ipa, u32 command, u32 command_param) { struct ipa_uc_mem_area *shared = ipa_uc_shared(ipa); - const struct ipa_reg *reg; + const struct reg *reg; u32 val; /* Fill in the command data */ @@ -243,9 +243,9 @@ static void send_uc_command(struct ipa *ipa, u32 command, u32 command_param) /* Use an interrupt to tell the microcontroller the command is ready */ reg = ipa_reg(ipa, IPA_IRQ_UC); - val = ipa_reg_bit(reg, UC_INTR); + val = reg_bit(reg, UC_INTR); - iowrite32(val, ipa->reg_virt + ipa_reg_offset(reg)); + iowrite32(val, ipa->reg_virt + reg_offset(reg)); } /* Tell the microcontroller the AP is shutting down */ diff --git a/drivers/net/ipa/ipa_version.h b/drivers/net/ipa/ipa_version.h index d15821467743..06e75b8ece7e 100644 --- a/drivers/net/ipa/ipa_version.h +++ b/drivers/net/ipa/ipa_version.h @@ -9,7 +9,7 @@ /** * enum ipa_version * @IPA_VERSION_3_0: IPA version 3.0/GSI version 1.0 - * @IPA_VERSION_3_1: IPA version 3.1/GSI version 1.1 + * @IPA_VERSION_3_1: IPA version 3.1/GSI version 1.0 * @IPA_VERSION_3_5: IPA version 3.5/GSI version 1.2 * @IPA_VERSION_3_5_1: IPA version 3.5.1/GSI version 1.3 * @IPA_VERSION_4_0: IPA version 4.0/GSI version 2.0 @@ -20,6 +20,8 @@ * @IPA_VERSION_4_9: IPA version 4.9/GSI version 2.9 * @IPA_VERSION_4_11: IPA version 4.11/GSI version 2.11 (2.1.1) * @IPA_VERSION_5_0: IPA version 5.0/GSI version 3.0 + * @IPA_VERSION_5_1: IPA version 5.1/GSI version 3.0 + * @IPA_VERSION_5_5: IPA version 5.5/GSI version 5.5 * @IPA_VERSION_COUNT: Number of defined IPA versions * * Defines the version of IPA (and GSI) hardware present on the platform. 
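[Editor's note — illustrative sketch, not part of the patch] The ipa_version values above form an ordered scale, which is why the new IPA_VERSION_5_1 and IPA_VERSION_5_5 entries are inserted in numerical order ahead of IPA_VERSION_COUNT in the hunk that follows. Driver code gates hardware features with plain comparisons against these values, as the ipa_qtime_config() and ipa_reg_id_valid() hunks earlier in this patch show. A minimal sketch of that pattern, with a hypothetical function name:

        /* Sketch only: version gating as used throughout the driver */
        static bool ipa_example_has_pulse_gran_3(struct ipa *ipa)
        {
                /* Qtime (and its pulse generators) arrived with IPA v4.5 */
                if (ipa->version < IPA_VERSION_4_5)
                        return false;

                /* Pulse generator 3 is configured only on IPA v5.0 and later */
                return ipa->version >= IPA_VERSION_5_0;
        }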
@@ -38,6 +40,8 @@ enum ipa_version { IPA_VERSION_4_9, IPA_VERSION_4_11, IPA_VERSION_5_0, + IPA_VERSION_5_1, + IPA_VERSION_5_5, IPA_VERSION_COUNT, /* Last; not a version */ }; diff --git a/drivers/net/ipa/reg.h b/drivers/net/ipa/reg.h new file mode 100644 index 000000000000..57b457f39b6e --- /dev/null +++ b/drivers/net/ipa/reg.h @@ -0,0 +1,133 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* *Copyright (C) 2022-2023 Linaro Ltd. */ + +#ifndef _REG_H_ +#define _REG_H_ + +#include <linux/types.h> +#include <linux/bits.h> + +/** + * struct reg - A register descriptor + * @offset: Register offset relative to base of register memory + * @stride: Distance between two instances, if parameterized + * @fcount: Number of entries in the @fmask array + * @fmask: Array of mask values defining position and width of fields + * @name: Upper-case name of the register + */ +struct reg { + u32 offset; + u32 stride; + u32 fcount; + const u32 *fmask; /* BIT(nr) or GENMASK(h, l) */ + const char *name; +}; + +/* Helper macro for defining "simple" (non-parameterized) registers */ +#define REG(__NAME, __reg_id, __offset) \ + REG_STRIDE(__NAME, __reg_id, __offset, 0) + +/* Helper macro for defining parameterized registers, specifying stride */ +#define REG_STRIDE(__NAME, __reg_id, __offset, __stride) \ + static const struct reg reg_ ## __reg_id = { \ + .name = #__NAME, \ + .offset = __offset, \ + .stride = __stride, \ + } + +#define REG_FIELDS(__NAME, __name, __offset) \ + REG_STRIDE_FIELDS(__NAME, __name, __offset, 0) + +#define REG_STRIDE_FIELDS(__NAME, __name, __offset, __stride) \ + static const struct reg reg_ ## __name = { \ + .name = #__NAME, \ + .offset = __offset, \ + .stride = __stride, \ + .fcount = ARRAY_SIZE(reg_ ## __name ## _fmask), \ + .fmask = reg_ ## __name ## _fmask, \ + } + +/** + * struct regs - Description of registers supported by hardware + * @reg_count: Number of registers in the @reg[] array + * @reg: Array of register descriptors + */ +struct regs { + u32 reg_count; + const struct reg **reg; +}; + +static inline const struct reg *reg(const struct regs *regs, u32 reg_id) +{ + if (WARN(reg_id >= regs->reg_count, + "reg out of range (%u > %u)\n", reg_id, regs->reg_count - 1)) + return NULL; + + return regs->reg[reg_id]; +} + +/* Return the field mask for a field in a register, or 0 on error */ +static inline u32 reg_fmask(const struct reg *reg, u32 field_id) +{ + if (!reg || WARN_ON(field_id >= reg->fcount)) + return 0; + + return reg->fmask[field_id]; +} + +/* Return the mask for a single-bit field in a register, or 0 on error */ +static inline u32 reg_bit(const struct reg *reg, u32 field_id) +{ + u32 fmask = reg_fmask(reg, field_id); + + if (WARN_ON(!is_power_of_2(fmask))) + return 0; + + return fmask; +} + +/* Return the maximum value representable by the given field; always 2^n - 1 */ +static inline u32 reg_field_max(const struct reg *reg, u32 field_id) +{ + u32 fmask = reg_fmask(reg, field_id); + + return fmask ? fmask >> __ffs(fmask) : 0; +} + +/* Encode a value into the given field of a register */ +static inline u32 reg_encode(const struct reg *reg, u32 field_id, u32 val) +{ + u32 fmask = reg_fmask(reg, field_id); + + if (!fmask) + return 0; + + val <<= __ffs(fmask); + if (WARN_ON(val & ~fmask)) + return 0; + + return val; +} + +/* Given a register value, decode (extract) the value in the given field */ +static inline u32 reg_decode(const struct reg *reg, u32 field_id, u32 val) +{ + u32 fmask = reg_fmask(reg, field_id); + + return fmask ? 
(val & fmask) >> __ffs(fmask) : 0; +} + +/* Returns 0 for NULL reg; warning should have already been issued */ +static inline u32 reg_offset(const struct reg *reg) +{ + return reg ? reg->offset : 0; +} + +/* Returns 0 for NULL reg; warning should have already been issued */ +static inline u32 reg_n_offset(const struct reg *reg, u32 n) +{ + return reg ? reg->offset + n * reg->stride : 0; +} + +#endif /* _REG_H_ */ diff --git a/drivers/net/ipa/reg/ipa_reg-v3.1.c b/drivers/net/ipa/reg/ipa_reg-v3.1.c index 677ece3bce9e..648dbfe1fce3 100644 --- a/drivers/net/ipa/reg/ipa_reg-v3.1.c +++ b/drivers/net/ipa/reg/ipa_reg-v3.1.c @@ -7,7 +7,7 @@ #include "../ipa.h" #include "../ipa_reg.h" -static const u32 ipa_reg_comp_cfg_fmask[] = { +static const u32 reg_comp_cfg_fmask[] = { [COMP_CFG_ENABLE] = BIT(0), [GSI_SNOC_BYPASS_DIS] = BIT(1), [GEN_QMB_0_SNOC_BYPASS_DIS] = BIT(2), @@ -16,9 +16,9 @@ static const u32 ipa_reg_comp_cfg_fmask[] = { /* Bits 5-31 reserved */ }; -IPA_REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c); +REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c); -static const u32 ipa_reg_clkon_cfg_fmask[] = { +static const u32 reg_clkon_cfg_fmask[] = { [CLKON_RX] = BIT(0), [CLKON_PROC] = BIT(1), [TX_WRAPPER] = BIT(2), @@ -39,9 +39,9 @@ static const u32 ipa_reg_clkon_cfg_fmask[] = { /* Bits 17-31 reserved */ }; -IPA_REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044); +REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044); -static const u32 ipa_reg_route_fmask[] = { +static const u32 reg_route_fmask[] = { [ROUTE_DIS] = BIT(0), [ROUTE_DEF_PIPE] = GENMASK(5, 1), [ROUTE_DEF_HDR_TABLE] = BIT(6), @@ -52,31 +52,31 @@ static const u32 ipa_reg_route_fmask[] = { /* Bits 25-31 reserved */ }; -IPA_REG_FIELDS(ROUTE, route, 0x00000048); +REG_FIELDS(ROUTE, route, 0x00000048); -static const u32 ipa_reg_shared_mem_size_fmask[] = { +static const u32 reg_shared_mem_size_fmask[] = { [MEM_SIZE] = GENMASK(15, 0), [MEM_BADDR] = GENMASK(31, 16), }; -IPA_REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054); +REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054); -static const u32 ipa_reg_qsb_max_writes_fmask[] = { +static const u32 reg_qsb_max_writes_fmask[] = { [GEN_QMB_0_MAX_WRITES] = GENMASK(3, 0), [GEN_QMB_1_MAX_WRITES] = GENMASK(7, 4), /* Bits 8-31 reserved */ }; -IPA_REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074); +REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074); -static const u32 ipa_reg_qsb_max_reads_fmask[] = { +static const u32 reg_qsb_max_reads_fmask[] = { [GEN_QMB_0_MAX_READS] = GENMASK(3, 0), [GEN_QMB_1_MAX_READS] = GENMASK(7, 4), }; -IPA_REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078); +REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078); -static const u32 ipa_reg_filt_rout_hash_en_fmask[] = { +static const u32 reg_filt_rout_hash_en_fmask[] = { [IPV6_ROUTER_HASH] = BIT(0), /* Bits 1-3 reserved */ [IPV6_FILTER_HASH] = BIT(4), @@ -87,9 +87,9 @@ static const u32 ipa_reg_filt_rout_hash_en_fmask[] = { /* Bits 13-31 reserved */ }; -IPA_REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x000008c); +REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x000008c); -static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = { +static const u32 reg_filt_rout_hash_flush_fmask[] = { [IPV6_ROUTER_HASH] = BIT(0), /* Bits 1-3 reserved */ [IPV6_FILTER_HASH] = BIT(4), @@ -100,121 +100,121 @@ static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = { /* Bits 13-31 reserved */ }; -IPA_REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x0000090); +REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x0000090); /* Valid bits 
defined by ipa->available */ -IPA_REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x0000010c, 0x0004); +REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x0000010c, 0x0004); -IPA_REG(IPA_BCR, ipa_bcr, 0x000001d0); +REG(IPA_BCR, ipa_bcr, 0x000001d0); -static const u32 ipa_reg_local_pkt_proc_cntxt_fmask[] = { +static const u32 reg_local_pkt_proc_cntxt_fmask[] = { [IPA_BASE_ADDR] = GENMASK(16, 0), /* Bits 17-31 reserved */ }; /* Offset must be a multiple of 8 */ -IPA_REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8); +REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8); /* Valid bits defined by ipa->available */ -IPA_REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004); +REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004); -static const u32 ipa_reg_counter_cfg_fmask[] = { +static const u32 reg_counter_cfg_fmask[] = { [EOT_COAL_GRANULARITY] = GENMASK(3, 0), [AGGR_GRANULARITY] = GENMASK(8, 4), /* Bits 5-31 reserved */ }; -IPA_REG_FIELDS(COUNTER_CFG, counter_cfg, 0x000001f0); +REG_FIELDS(COUNTER_CFG, counter_cfg, 0x000001f0); -static const u32 ipa_reg_src_rsrc_grp_01_rsrc_type_fmask[] = { +static const u32 reg_src_rsrc_grp_01_rsrc_type_fmask[] = { [X_MIN_LIM] = GENMASK(7, 0), [X_MAX_LIM] = GENMASK(15, 8), [Y_MIN_LIM] = GENMASK(23, 16), [Y_MAX_LIM] = GENMASK(31, 24), }; -IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type, - 0x00000400, 0x0020); +REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type, + 0x00000400, 0x0020); -static const u32 ipa_reg_src_rsrc_grp_23_rsrc_type_fmask[] = { +static const u32 reg_src_rsrc_grp_23_rsrc_type_fmask[] = { [X_MIN_LIM] = GENMASK(7, 0), [X_MAX_LIM] = GENMASK(15, 8), [Y_MIN_LIM] = GENMASK(23, 16), [Y_MAX_LIM] = GENMASK(31, 24), }; -IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type, - 0x00000404, 0x0020); +REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type, + 0x00000404, 0x0020); -static const u32 ipa_reg_src_rsrc_grp_45_rsrc_type_fmask[] = { +static const u32 reg_src_rsrc_grp_45_rsrc_type_fmask[] = { [X_MIN_LIM] = GENMASK(7, 0), [X_MAX_LIM] = GENMASK(15, 8), [Y_MIN_LIM] = GENMASK(23, 16), [Y_MAX_LIM] = GENMASK(31, 24), }; -IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_45_RSRC_TYPE, src_rsrc_grp_45_rsrc_type, - 0x00000408, 0x0020); +REG_STRIDE_FIELDS(SRC_RSRC_GRP_45_RSRC_TYPE, src_rsrc_grp_45_rsrc_type, + 0x00000408, 0x0020); -static const u32 ipa_reg_src_rsrc_grp_67_rsrc_type_fmask[] = { +static const u32 reg_src_rsrc_grp_67_rsrc_type_fmask[] = { [X_MIN_LIM] = GENMASK(7, 0), [X_MAX_LIM] = GENMASK(15, 8), [Y_MIN_LIM] = GENMASK(23, 16), [Y_MAX_LIM] = GENMASK(31, 24), }; -IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_67_RSRC_TYPE, src_rsrc_grp_67_rsrc_type, - 0x0000040c, 0x0020); +REG_STRIDE_FIELDS(SRC_RSRC_GRP_67_RSRC_TYPE, src_rsrc_grp_67_rsrc_type, + 0x0000040c, 0x0020); -static const u32 ipa_reg_dst_rsrc_grp_01_rsrc_type_fmask[] = { +static const u32 reg_dst_rsrc_grp_01_rsrc_type_fmask[] = { [X_MIN_LIM] = GENMASK(7, 0), [X_MAX_LIM] = GENMASK(15, 8), [Y_MIN_LIM] = GENMASK(23, 16), [Y_MAX_LIM] = GENMASK(31, 24), }; -IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type, - 0x00000500, 0x0020); +REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type, + 0x00000500, 0x0020); -static const u32 ipa_reg_dst_rsrc_grp_23_rsrc_type_fmask[] = { +static const u32 reg_dst_rsrc_grp_23_rsrc_type_fmask[] = { [X_MIN_LIM] = GENMASK(7, 0), [X_MAX_LIM] = GENMASK(15, 8), [Y_MIN_LIM] = GENMASK(23, 16), [Y_MAX_LIM] = 
GENMASK(31, 24), }; -IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type, - 0x00000504, 0x0020); +REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type, + 0x00000504, 0x0020); -static const u32 ipa_reg_dst_rsrc_grp_45_rsrc_type_fmask[] = { +static const u32 reg_dst_rsrc_grp_45_rsrc_type_fmask[] = { [X_MIN_LIM] = GENMASK(7, 0), [X_MAX_LIM] = GENMASK(15, 8), [Y_MIN_LIM] = GENMASK(23, 16), [Y_MAX_LIM] = GENMASK(31, 24), }; -IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_45_RSRC_TYPE, dst_rsrc_grp_45_rsrc_type, - 0x00000508, 0x0020); +REG_STRIDE_FIELDS(DST_RSRC_GRP_45_RSRC_TYPE, dst_rsrc_grp_45_rsrc_type, + 0x00000508, 0x0020); -static const u32 ipa_reg_dst_rsrc_grp_67_rsrc_type_fmask[] = { +static const u32 reg_dst_rsrc_grp_67_rsrc_type_fmask[] = { [X_MIN_LIM] = GENMASK(7, 0), [X_MAX_LIM] = GENMASK(15, 8), [Y_MIN_LIM] = GENMASK(23, 16), [Y_MAX_LIM] = GENMASK(31, 24), }; -IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_67_RSRC_TYPE, dst_rsrc_grp_67_rsrc_type, - 0x0000050c, 0x0020); +REG_STRIDE_FIELDS(DST_RSRC_GRP_67_RSRC_TYPE, dst_rsrc_grp_67_rsrc_type, + 0x0000050c, 0x0020); -static const u32 ipa_reg_endp_init_ctrl_fmask[] = { +static const u32 reg_endp_init_ctrl_fmask[] = { [ENDP_SUSPEND] = BIT(0), [ENDP_DELAY] = BIT(1), /* Bits 2-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_CTRL, endp_init_ctrl, 0x00000800, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_CTRL, endp_init_ctrl, 0x00000800, 0x0070); -static const u32 ipa_reg_endp_init_cfg_fmask[] = { +static const u32 reg_endp_init_cfg_fmask[] = { [FRAG_OFFLOAD_EN] = BIT(0), [CS_OFFLOAD_EN] = GENMASK(2, 1), [CS_METADATA_HDR_OFFSET] = GENMASK(6, 3), @@ -223,16 +223,16 @@ static const u32 ipa_reg_endp_init_cfg_fmask[] = { /* Bits 9-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070); -static const u32 ipa_reg_endp_init_nat_fmask[] = { +static const u32 reg_endp_init_nat_fmask[] = { [NAT_EN] = GENMASK(1, 0), /* Bits 2-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070); -static const u32 ipa_reg_endp_init_hdr_fmask[] = { +static const u32 reg_endp_init_hdr_fmask[] = { [HDR_LEN] = GENMASK(5, 0), [HDR_OFST_METADATA_VALID] = BIT(6), [HDR_OFST_METADATA] = GENMASK(12, 7), @@ -245,9 +245,9 @@ static const u32 ipa_reg_endp_init_hdr_fmask[] = { /* Bits 29-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070); -static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = { +static const u32 reg_endp_init_hdr_ext_fmask[] = { [HDR_ENDIANNESS] = BIT(0), [HDR_TOTAL_LEN_OR_PAD_VALID] = BIT(1), [HDR_TOTAL_LEN_OR_PAD] = BIT(2), @@ -257,12 +257,12 @@ static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = { /* Bits 14-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070); -IPA_REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask, - 0x00000818, 0x0070); +REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask, + 0x00000818, 0x0070); -static const u32 ipa_reg_endp_init_mode_fmask[] = { +static const u32 reg_endp_init_mode_fmask[] = { [ENDP_MODE] = GENMASK(2, 0), /* Bit 3 reserved */ [DEST_PIPE_INDEX] = GENMASK(8, 4), @@ -274,9 +274,9 @@ static const u32 ipa_reg_endp_init_mode_fmask[] = { /* Bit 31 
reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070); -static const u32 ipa_reg_endp_init_aggr_fmask[] = { +static const u32 reg_endp_init_aggr_fmask[] = { [AGGR_EN] = GENMASK(1, 0), [AGGR_TYPE] = GENMASK(4, 2), [BYTE_LIMIT] = GENMASK(9, 5), @@ -289,25 +289,25 @@ static const u32 ipa_reg_endp_init_aggr_fmask[] = { /* Bits 25-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070); -static const u32 ipa_reg_endp_init_hol_block_en_fmask[] = { +static const u32 reg_endp_init_hol_block_en_fmask[] = { [HOL_BLOCK_EN] = BIT(0), /* Bits 1-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en, - 0x0000082c, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en, + 0x0000082c, 0x0070); /* Entire register is a tick count */ -static const u32 ipa_reg_endp_init_hol_block_timer_fmask[] = { +static const u32 reg_endp_init_hol_block_timer_fmask[] = { [TIMER_BASE_VALUE] = GENMASK(31, 0), }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer, - 0x00000830, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer, + 0x00000830, 0x0070); -static const u32 ipa_reg_endp_init_deaggr_fmask[] = { +static const u32 reg_endp_init_deaggr_fmask[] = { [DEAGGR_HDR_LEN] = GENMASK(5, 0), [SYSPIPE_ERR_DETECTION] = BIT(6), [PACKET_OFFSET_VALID] = BIT(7), @@ -317,25 +317,24 @@ static const u32 ipa_reg_endp_init_deaggr_fmask[] = { [MAX_PACKET_LEN] = GENMASK(31, 16), }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070); -static const u32 ipa_reg_endp_init_rsrc_grp_fmask[] = { +static const u32 reg_endp_init_rsrc_grp_fmask[] = { [ENDP_RSRC_GRP] = GENMASK(2, 0), /* Bits 3-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp, - 0x00000838, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp, 0x00000838, 0x0070); -static const u32 ipa_reg_endp_init_seq_fmask[] = { +static const u32 reg_endp_init_seq_fmask[] = { [SEQ_TYPE] = GENMASK(7, 0), [SEQ_REP_TYPE] = GENMASK(15, 8), /* Bits 16-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070); -static const u32 ipa_reg_endp_status_fmask[] = { +static const u32 reg_endp_status_fmask[] = { [STATUS_EN] = BIT(0), [STATUS_ENDP] = GENMASK(5, 1), /* Bits 6-7 reserved */ @@ -343,9 +342,9 @@ static const u32 ipa_reg_endp_status_fmask[] = { /* Bits 9-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070); +REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070); -static const u32 ipa_reg_endp_filter_router_hsh_cfg_fmask[] = { +static const u32 reg_endp_filter_router_hsh_cfg_fmask[] = { [FILTER_HASH_MSK_SRC_ID] = BIT(0), [FILTER_HASH_MSK_SRC_IP] = BIT(1), [FILTER_HASH_MSK_DST_IP] = BIT(2), @@ -366,84 +365,84 @@ static const u32 ipa_reg_endp_filter_router_hsh_cfg_fmask[] = { /* Bits 23-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_FILTER_ROUTER_HSH_CFG, endp_filter_router_hsh_cfg, - 0x0000085c, 0x0070); +REG_STRIDE_FIELDS(ENDP_FILTER_ROUTER_HSH_CFG, endp_filter_router_hsh_cfg, + 0x0000085c, 0x0070); /* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */ -IPA_REG(IPA_IRQ_STTS, 
ipa_irq_stts, 0x00003008 + 0x1000 * GSI_EE_AP); +REG(IPA_IRQ_STTS, ipa_irq_stts, 0x00003008 + 0x1000 * GSI_EE_AP); /* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */ -IPA_REG(IPA_IRQ_EN, ipa_irq_en, 0x0000300c + 0x1000 * GSI_EE_AP); +REG(IPA_IRQ_EN, ipa_irq_en, 0x0000300c + 0x1000 * GSI_EE_AP); /* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */ -IPA_REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00003010 + 0x1000 * GSI_EE_AP); +REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00003010 + 0x1000 * GSI_EE_AP); -static const u32 ipa_reg_ipa_irq_uc_fmask[] = { +static const u32 reg_ipa_irq_uc_fmask[] = { [UC_INTR] = BIT(0), /* Bits 1-31 reserved */ }; -IPA_REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000301c + 0x1000 * GSI_EE_AP); +REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000301c + 0x1000 * GSI_EE_AP); /* Valid bits defined by ipa->available */ -IPA_REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info, - 0x00003030 + 0x1000 * GSI_EE_AP, 0x0004); +REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info, + 0x00003030 + 0x1000 * GSI_EE_AP, 0x0004); /* Valid bits defined by ipa->available */ -IPA_REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en, - 0x00003034 + 0x1000 * GSI_EE_AP, 0x0004); +REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en, + 0x00003034 + 0x1000 * GSI_EE_AP, 0x0004); /* Valid bits defined by ipa->available */ -IPA_REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr, - 0x00003038 + 0x1000 * GSI_EE_AP, 0x0004); - -static const struct ipa_reg *ipa_reg_array[] = { - [COMP_CFG] = &ipa_reg_comp_cfg, - [CLKON_CFG] = &ipa_reg_clkon_cfg, - [ROUTE] = &ipa_reg_route, - [SHARED_MEM_SIZE] = &ipa_reg_shared_mem_size, - [QSB_MAX_WRITES] = &ipa_reg_qsb_max_writes, - [QSB_MAX_READS] = &ipa_reg_qsb_max_reads, - [FILT_ROUT_HASH_EN] = &ipa_reg_filt_rout_hash_en, - [FILT_ROUT_HASH_FLUSH] = &ipa_reg_filt_rout_hash_flush, - [STATE_AGGR_ACTIVE] = &ipa_reg_state_aggr_active, - [IPA_BCR] = &ipa_reg_ipa_bcr, - [LOCAL_PKT_PROC_CNTXT] = &ipa_reg_local_pkt_proc_cntxt, - [AGGR_FORCE_CLOSE] = &ipa_reg_aggr_force_close, - [COUNTER_CFG] = &ipa_reg_counter_cfg, - [SRC_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_01_rsrc_type, - [SRC_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_23_rsrc_type, - [SRC_RSRC_GRP_45_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_45_rsrc_type, - [SRC_RSRC_GRP_67_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_67_rsrc_type, - [DST_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_01_rsrc_type, - [DST_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_23_rsrc_type, - [DST_RSRC_GRP_45_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_45_rsrc_type, - [DST_RSRC_GRP_67_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_67_rsrc_type, - [ENDP_INIT_CTRL] = &ipa_reg_endp_init_ctrl, - [ENDP_INIT_CFG] = &ipa_reg_endp_init_cfg, - [ENDP_INIT_NAT] = &ipa_reg_endp_init_nat, - [ENDP_INIT_HDR] = &ipa_reg_endp_init_hdr, - [ENDP_INIT_HDR_EXT] = &ipa_reg_endp_init_hdr_ext, - [ENDP_INIT_HDR_METADATA_MASK] = &ipa_reg_endp_init_hdr_metadata_mask, - [ENDP_INIT_MODE] = &ipa_reg_endp_init_mode, - [ENDP_INIT_AGGR] = &ipa_reg_endp_init_aggr, - [ENDP_INIT_HOL_BLOCK_EN] = &ipa_reg_endp_init_hol_block_en, - [ENDP_INIT_HOL_BLOCK_TIMER] = &ipa_reg_endp_init_hol_block_timer, - [ENDP_INIT_DEAGGR] = &ipa_reg_endp_init_deaggr, - [ENDP_INIT_RSRC_GRP] = &ipa_reg_endp_init_rsrc_grp, - [ENDP_INIT_SEQ] = &ipa_reg_endp_init_seq, - [ENDP_STATUS] = &ipa_reg_endp_status, - [ENDP_FILTER_ROUTER_HSH_CFG] = &ipa_reg_endp_filter_router_hsh_cfg, - [IPA_IRQ_STTS] = &ipa_reg_ipa_irq_stts, - [IPA_IRQ_EN] = &ipa_reg_ipa_irq_en, - [IPA_IRQ_CLR] = &ipa_reg_ipa_irq_clr, - [IPA_IRQ_UC] = &ipa_reg_ipa_irq_uc, - [IRQ_SUSPEND_INFO] = 
&ipa_reg_irq_suspend_info, - [IRQ_SUSPEND_EN] = &ipa_reg_irq_suspend_en, - [IRQ_SUSPEND_CLR] = &ipa_reg_irq_suspend_clr, -}; - -const struct ipa_regs ipa_regs_v3_1 = { - .reg_count = ARRAY_SIZE(ipa_reg_array), - .reg = ipa_reg_array, +REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr, + 0x00003038 + 0x1000 * GSI_EE_AP, 0x0004); + +static const struct reg *reg_array[] = { + [COMP_CFG] = ®_comp_cfg, + [CLKON_CFG] = ®_clkon_cfg, + [ROUTE] = ®_route, + [SHARED_MEM_SIZE] = ®_shared_mem_size, + [QSB_MAX_WRITES] = ®_qsb_max_writes, + [QSB_MAX_READS] = ®_qsb_max_reads, + [FILT_ROUT_HASH_EN] = ®_filt_rout_hash_en, + [FILT_ROUT_HASH_FLUSH] = ®_filt_rout_hash_flush, + [STATE_AGGR_ACTIVE] = ®_state_aggr_active, + [IPA_BCR] = ®_ipa_bcr, + [LOCAL_PKT_PROC_CNTXT] = ®_local_pkt_proc_cntxt, + [AGGR_FORCE_CLOSE] = ®_aggr_force_close, + [COUNTER_CFG] = ®_counter_cfg, + [SRC_RSRC_GRP_01_RSRC_TYPE] = ®_src_rsrc_grp_01_rsrc_type, + [SRC_RSRC_GRP_23_RSRC_TYPE] = ®_src_rsrc_grp_23_rsrc_type, + [SRC_RSRC_GRP_45_RSRC_TYPE] = ®_src_rsrc_grp_45_rsrc_type, + [SRC_RSRC_GRP_67_RSRC_TYPE] = ®_src_rsrc_grp_67_rsrc_type, + [DST_RSRC_GRP_01_RSRC_TYPE] = ®_dst_rsrc_grp_01_rsrc_type, + [DST_RSRC_GRP_23_RSRC_TYPE] = ®_dst_rsrc_grp_23_rsrc_type, + [DST_RSRC_GRP_45_RSRC_TYPE] = ®_dst_rsrc_grp_45_rsrc_type, + [DST_RSRC_GRP_67_RSRC_TYPE] = ®_dst_rsrc_grp_67_rsrc_type, + [ENDP_INIT_CTRL] = ®_endp_init_ctrl, + [ENDP_INIT_CFG] = ®_endp_init_cfg, + [ENDP_INIT_NAT] = ®_endp_init_nat, + [ENDP_INIT_HDR] = ®_endp_init_hdr, + [ENDP_INIT_HDR_EXT] = ®_endp_init_hdr_ext, + [ENDP_INIT_HDR_METADATA_MASK] = ®_endp_init_hdr_metadata_mask, + [ENDP_INIT_MODE] = ®_endp_init_mode, + [ENDP_INIT_AGGR] = ®_endp_init_aggr, + [ENDP_INIT_HOL_BLOCK_EN] = ®_endp_init_hol_block_en, + [ENDP_INIT_HOL_BLOCK_TIMER] = ®_endp_init_hol_block_timer, + [ENDP_INIT_DEAGGR] = ®_endp_init_deaggr, + [ENDP_INIT_RSRC_GRP] = ®_endp_init_rsrc_grp, + [ENDP_INIT_SEQ] = ®_endp_init_seq, + [ENDP_STATUS] = ®_endp_status, + [ENDP_FILTER_ROUTER_HSH_CFG] = ®_endp_filter_router_hsh_cfg, + [IPA_IRQ_STTS] = ®_ipa_irq_stts, + [IPA_IRQ_EN] = ®_ipa_irq_en, + [IPA_IRQ_CLR] = ®_ipa_irq_clr, + [IPA_IRQ_UC] = ®_ipa_irq_uc, + [IRQ_SUSPEND_INFO] = ®_irq_suspend_info, + [IRQ_SUSPEND_EN] = ®_irq_suspend_en, + [IRQ_SUSPEND_CLR] = ®_irq_suspend_clr, +}; + +const struct regs ipa_regs_v3_1 = { + .reg_count = ARRAY_SIZE(reg_array), + .reg = reg_array, }; diff --git a/drivers/net/ipa/reg/ipa_reg-v3.5.1.c b/drivers/net/ipa/reg/ipa_reg-v3.5.1.c index b9c6a50de243..78b1bf60cd02 100644 --- a/drivers/net/ipa/reg/ipa_reg-v3.5.1.c +++ b/drivers/net/ipa/reg/ipa_reg-v3.5.1.c @@ -7,7 +7,7 @@ #include "../ipa.h" #include "../ipa_reg.h" -static const u32 ipa_reg_comp_cfg_fmask[] = { +static const u32 reg_comp_cfg_fmask[] = { [COMP_CFG_ENABLE] = BIT(0), [GSI_SNOC_BYPASS_DIS] = BIT(1), [GEN_QMB_0_SNOC_BYPASS_DIS] = BIT(2), @@ -16,9 +16,9 @@ static const u32 ipa_reg_comp_cfg_fmask[] = { /* Bits 5-31 reserved */ }; -IPA_REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c); +REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c); -static const u32 ipa_reg_clkon_cfg_fmask[] = { +static const u32 reg_clkon_cfg_fmask[] = { [CLKON_RX] = BIT(0), [CLKON_PROC] = BIT(1), [TX_WRAPPER] = BIT(2), @@ -44,9 +44,9 @@ static const u32 ipa_reg_clkon_cfg_fmask[] = { /* Bits 22-31 reserved */ }; -IPA_REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044); +REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044); -static const u32 ipa_reg_route_fmask[] = { +static const u32 reg_route_fmask[] = { [ROUTE_DIS] = BIT(0), [ROUTE_DEF_PIPE] = GENMASK(5, 1), [ROUTE_DEF_HDR_TABLE] = BIT(6), @@ 
-57,31 +57,31 @@ static const u32 ipa_reg_route_fmask[] = { /* Bits 25-31 reserved */ }; -IPA_REG_FIELDS(ROUTE, route, 0x00000048); +REG_FIELDS(ROUTE, route, 0x00000048); -static const u32 ipa_reg_shared_mem_size_fmask[] = { +static const u32 reg_shared_mem_size_fmask[] = { [MEM_SIZE] = GENMASK(15, 0), [MEM_BADDR] = GENMASK(31, 16), }; -IPA_REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054); +REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054); -static const u32 ipa_reg_qsb_max_writes_fmask[] = { +static const u32 reg_qsb_max_writes_fmask[] = { [GEN_QMB_0_MAX_WRITES] = GENMASK(3, 0), [GEN_QMB_1_MAX_WRITES] = GENMASK(7, 4), /* Bits 8-31 reserved */ }; -IPA_REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074); +REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074); -static const u32 ipa_reg_qsb_max_reads_fmask[] = { +static const u32 reg_qsb_max_reads_fmask[] = { [GEN_QMB_0_MAX_READS] = GENMASK(3, 0), [GEN_QMB_1_MAX_READS] = GENMASK(7, 4), }; -IPA_REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078); +REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078); -static const u32 ipa_reg_filt_rout_hash_en_fmask[] = { +static const u32 reg_filt_rout_hash_en_fmask[] = { [IPV6_ROUTER_HASH] = BIT(0), /* Bits 1-3 reserved */ [IPV6_FILTER_HASH] = BIT(4), @@ -92,9 +92,9 @@ static const u32 ipa_reg_filt_rout_hash_en_fmask[] = { /* Bits 13-31 reserved */ }; -IPA_REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x000008c); +REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x000008c); -static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = { +static const u32 reg_filt_rout_hash_flush_fmask[] = { [IPV6_ROUTER_HASH] = BIT(0), /* Bits 1-3 reserved */ [IPV6_FILTER_HASH] = BIT(4), @@ -105,42 +105,42 @@ static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = { /* Bits 13-31 reserved */ }; -IPA_REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x0000090); +REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x0000090); /* Valid bits defined by ipa->available */ -IPA_REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x0000010c, 0x0004); +REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x0000010c, 0x0004); -IPA_REG(IPA_BCR, ipa_bcr, 0x000001d0); +REG(IPA_BCR, ipa_bcr, 0x000001d0); -static const u32 ipa_reg_local_pkt_proc_cntxt_fmask[] = { +static const u32 reg_local_pkt_proc_cntxt_fmask[] = { [IPA_BASE_ADDR] = GENMASK(16, 0), /* Bits 17-31 reserved */ }; /* Offset must be a multiple of 8 */ -IPA_REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8); +REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8); /* Valid bits defined by ipa->available */ -IPA_REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004); +REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004); -static const u32 ipa_reg_counter_cfg_fmask[] = { +static const u32 reg_counter_cfg_fmask[] = { /* Bits 0-3 reserved */ [AGGR_GRANULARITY] = GENMASK(8, 4), /* Bits 5-31 reserved */ }; -IPA_REG_FIELDS(COUNTER_CFG, counter_cfg, 0x000001f0); +REG_FIELDS(COUNTER_CFG, counter_cfg, 0x000001f0); -static const u32 ipa_reg_ipa_tx_cfg_fmask[] = { +static const u32 reg_ipa_tx_cfg_fmask[] = { [TX0_PREFETCH_DISABLE] = BIT(0), [TX1_PREFETCH_DISABLE] = BIT(1), [PREFETCH_ALMOST_EMPTY_SIZE] = GENMASK(4, 2), /* Bits 5-31 reserved */ }; -IPA_REG_FIELDS(IPA_TX_CFG, ipa_tx_cfg, 0x000001fc); +REG_FIELDS(IPA_TX_CFG, ipa_tx_cfg, 0x000001fc); -static const u32 ipa_reg_flavor_0_fmask[] = { +static const u32 reg_flavor_0_fmask[] = { [MAX_PIPES] = GENMASK(3, 0), /* Bits 4-7 reserved */ [MAX_CONS_PIPES] = GENMASK(12, 
8), @@ -151,17 +151,17 @@ static const u32 ipa_reg_flavor_0_fmask[] = { /* Bits 28-31 reserved */ }; -IPA_REG_FIELDS(FLAVOR_0, flavor_0, 0x00000210); +REG_FIELDS(FLAVOR_0, flavor_0, 0x00000210); -static const u32 ipa_reg_idle_indication_cfg_fmask[] = { +static const u32 reg_idle_indication_cfg_fmask[] = { [ENTER_IDLE_DEBOUNCE_THRESH] = GENMASK(15, 0), [CONST_NON_IDLE_ENABLE] = BIT(16), /* Bits 17-31 reserved */ }; -IPA_REG_FIELDS(IDLE_INDICATION_CFG, idle_indication_cfg, 0x00000220); +REG_FIELDS(IDLE_INDICATION_CFG, idle_indication_cfg, 0x00000220); -static const u32 ipa_reg_src_rsrc_grp_01_rsrc_type_fmask[] = { +static const u32 reg_src_rsrc_grp_01_rsrc_type_fmask[] = { [X_MIN_LIM] = GENMASK(5, 0), /* Bits 6-7 reserved */ [X_MAX_LIM] = GENMASK(13, 8), @@ -172,10 +172,10 @@ static const u32 ipa_reg_src_rsrc_grp_01_rsrc_type_fmask[] = { /* Bits 30-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type, - 0x00000400, 0x0020); +REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type, + 0x00000400, 0x0020); -static const u32 ipa_reg_src_rsrc_grp_23_rsrc_type_fmask[] = { +static const u32 reg_src_rsrc_grp_23_rsrc_type_fmask[] = { [X_MIN_LIM] = GENMASK(5, 0), /* Bits 6-7 reserved */ [X_MAX_LIM] = GENMASK(13, 8), @@ -186,10 +186,10 @@ static const u32 ipa_reg_src_rsrc_grp_23_rsrc_type_fmask[] = { /* Bits 30-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type, - 0x00000404, 0x0020); +REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type, + 0x00000404, 0x0020); -static const u32 ipa_reg_dst_rsrc_grp_01_rsrc_type_fmask[] = { +static const u32 reg_dst_rsrc_grp_01_rsrc_type_fmask[] = { [X_MIN_LIM] = GENMASK(5, 0), /* Bits 6-7 reserved */ [X_MAX_LIM] = GENMASK(13, 8), @@ -200,10 +200,10 @@ static const u32 ipa_reg_dst_rsrc_grp_01_rsrc_type_fmask[] = { /* Bits 30-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type, - 0x00000500, 0x0020); +REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type, + 0x00000500, 0x0020); -static const u32 ipa_reg_dst_rsrc_grp_23_rsrc_type_fmask[] = { +static const u32 reg_dst_rsrc_grp_23_rsrc_type_fmask[] = { [X_MIN_LIM] = GENMASK(5, 0), /* Bits 6-7 reserved */ [X_MAX_LIM] = GENMASK(13, 8), @@ -214,18 +214,18 @@ static const u32 ipa_reg_dst_rsrc_grp_23_rsrc_type_fmask[] = { /* Bits 30-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type, - 0x00000504, 0x0020); +REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type, + 0x00000504, 0x0020); -static const u32 ipa_reg_endp_init_ctrl_fmask[] = { +static const u32 reg_endp_init_ctrl_fmask[] = { [ENDP_SUSPEND] = BIT(0), [ENDP_DELAY] = BIT(1), /* Bits 2-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_CTRL, endp_init_ctrl, 0x00000800, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_CTRL, endp_init_ctrl, 0x00000800, 0x0070); -static const u32 ipa_reg_endp_init_cfg_fmask[] = { +static const u32 reg_endp_init_cfg_fmask[] = { [FRAG_OFFLOAD_EN] = BIT(0), [CS_OFFLOAD_EN] = GENMASK(2, 1), [CS_METADATA_HDR_OFFSET] = GENMASK(6, 3), @@ -234,16 +234,16 @@ static const u32 ipa_reg_endp_init_cfg_fmask[] = { /* Bits 9-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070); -static const u32 ipa_reg_endp_init_nat_fmask[] = { +static const u32 reg_endp_init_nat_fmask[] = { [NAT_EN] = GENMASK(1, 0), /* Bits 2-31 reserved */ }; 
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070); -static const u32 ipa_reg_endp_init_hdr_fmask[] = { +static const u32 reg_endp_init_hdr_fmask[] = { [HDR_LEN] = GENMASK(5, 0), [HDR_OFST_METADATA_VALID] = BIT(6), [HDR_OFST_METADATA] = GENMASK(12, 7), @@ -256,9 +256,9 @@ static const u32 ipa_reg_endp_init_hdr_fmask[] = { /* Bits 29-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070); -static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = { +static const u32 reg_endp_init_hdr_ext_fmask[] = { [HDR_ENDIANNESS] = BIT(0), [HDR_TOTAL_LEN_OR_PAD_VALID] = BIT(1), [HDR_TOTAL_LEN_OR_PAD] = BIT(2), @@ -268,12 +268,12 @@ static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = { /* Bits 14-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070); -IPA_REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask, - 0x00000818, 0x0070); +REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask, + 0x00000818, 0x0070); -static const u32 ipa_reg_endp_init_mode_fmask[] = { +static const u32 reg_endp_init_mode_fmask[] = { [ENDP_MODE] = GENMASK(2, 0), /* Bit 3 reserved */ [DEST_PIPE_INDEX] = GENMASK(8, 4), @@ -285,9 +285,9 @@ static const u32 ipa_reg_endp_init_mode_fmask[] = { /* Bit 31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070); -static const u32 ipa_reg_endp_init_aggr_fmask[] = { +static const u32 reg_endp_init_aggr_fmask[] = { [AGGR_EN] = GENMASK(1, 0), [AGGR_TYPE] = GENMASK(4, 2), [BYTE_LIMIT] = GENMASK(9, 5), @@ -300,25 +300,25 @@ static const u32 ipa_reg_endp_init_aggr_fmask[] = { /* Bits 25-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070); -static const u32 ipa_reg_endp_init_hol_block_en_fmask[] = { +static const u32 reg_endp_init_hol_block_en_fmask[] = { [HOL_BLOCK_EN] = BIT(0), /* Bits 1-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en, - 0x0000082c, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en, + 0x0000082c, 0x0070); /* Entire register is a tick count */ -static const u32 ipa_reg_endp_init_hol_block_timer_fmask[] = { +static const u32 reg_endp_init_hol_block_timer_fmask[] = { [TIMER_BASE_VALUE] = GENMASK(31, 0), }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer, - 0x00000830, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer, + 0x00000830, 0x0070); -static const u32 ipa_reg_endp_init_deaggr_fmask[] = { +static const u32 reg_endp_init_deaggr_fmask[] = { [DEAGGR_HDR_LEN] = GENMASK(5, 0), [SYSPIPE_ERR_DETECTION] = BIT(6), [PACKET_OFFSET_VALID] = BIT(7), @@ -328,25 +328,24 @@ static const u32 ipa_reg_endp_init_deaggr_fmask[] = { [MAX_PACKET_LEN] = GENMASK(31, 16), }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070); -static const u32 ipa_reg_endp_init_rsrc_grp_fmask[] = { +static const u32 reg_endp_init_rsrc_grp_fmask[] = { [ENDP_RSRC_GRP] = GENMASK(1, 0), /* Bits 2-31 reserved */ }; 
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp,
- 0x00000838, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp, 0x00000838, 0x0070);
-static const u32 ipa_reg_endp_init_seq_fmask[] = {
+static const u32 reg_endp_init_seq_fmask[] = {
 [SEQ_TYPE] = GENMASK(7, 0),
 [SEQ_REP_TYPE] = GENMASK(15, 8),
 /* Bits 16-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070);
-static const u32 ipa_reg_endp_status_fmask[] = {
+static const u32 reg_endp_status_fmask[] = {
 [STATUS_EN] = BIT(0),
 [STATUS_ENDP] = GENMASK(5, 1),
 /* Bits 6-7 reserved */
@@ -354,9 +353,9 @@ static const u32 ipa_reg_endp_status_fmask[] = {
 /* Bits 9-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070);
+REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070);
-static const u32 ipa_reg_endp_filter_router_hsh_cfg_fmask[] = {
+static const u32 reg_endp_filter_router_hsh_cfg_fmask[] = {
 [FILTER_HASH_MSK_SRC_ID] = BIT(0),
 [FILTER_HASH_MSK_SRC_IP] = BIT(1),
 [FILTER_HASH_MSK_DST_IP] = BIT(2),
@@ -377,83 +376,83 @@ static const u32 ipa_reg_endp_filter_router_hsh_cfg_fmask[] = {
 /* Bits 23-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_FILTER_ROUTER_HSH_CFG, endp_filter_router_hsh_cfg,
- 0x0000085c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_FILTER_ROUTER_HSH_CFG, endp_filter_router_hsh_cfg,
+ 0x0000085c, 0x0070);
 /* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_STTS, ipa_irq_stts, 0x00003008 + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_STTS, ipa_irq_stts, 0x00003008 + 0x1000 * GSI_EE_AP);
 /* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_EN, ipa_irq_en, 0x0000300c + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_EN, ipa_irq_en, 0x0000300c + 0x1000 * GSI_EE_AP);
 /* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00003010 + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00003010 + 0x1000 * GSI_EE_AP);
-static const u32 ipa_reg_ipa_irq_uc_fmask[] = {
+static const u32 reg_ipa_irq_uc_fmask[] = {
 [UC_INTR] = BIT(0),
 /* Bits 1-31 reserved */
 };
-IPA_REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000301c + 0x1000 * GSI_EE_AP);
+REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000301c + 0x1000 * GSI_EE_AP);
 /* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info,
- 0x00003030 + 0x1000 * GSI_EE_AP, 0x0004);
+REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info,
+ 0x00003030 + 0x1000 * GSI_EE_AP, 0x0004);
 /* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en,
- 0x00003034 + 0x1000 * GSI_EE_AP, 0x0004);
+REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en,
+ 0x00003034 + 0x1000 * GSI_EE_AP, 0x0004);
 /* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr,
- 0x00003038 + 0x1000 * GSI_EE_AP, 0x0004);
-
-static const struct ipa_reg *ipa_reg_array[] = {
- [COMP_CFG] = &ipa_reg_comp_cfg,
- [CLKON_CFG] = &ipa_reg_clkon_cfg,
- [ROUTE] = &ipa_reg_route,
- [SHARED_MEM_SIZE] = &ipa_reg_shared_mem_size,
- [QSB_MAX_WRITES] = &ipa_reg_qsb_max_writes,
- [QSB_MAX_READS] = &ipa_reg_qsb_max_reads,
- [FILT_ROUT_HASH_EN] = &ipa_reg_filt_rout_hash_en,
- [FILT_ROUT_HASH_FLUSH] = &ipa_reg_filt_rout_hash_flush,
- [STATE_AGGR_ACTIVE] = &ipa_reg_state_aggr_active,
- [IPA_BCR] = &ipa_reg_ipa_bcr,
- [LOCAL_PKT_PROC_CNTXT] = &ipa_reg_local_pkt_proc_cntxt,
- [AGGR_FORCE_CLOSE] = &ipa_reg_aggr_force_close,
- [COUNTER_CFG] = &ipa_reg_counter_cfg,
- [IPA_TX_CFG] = &ipa_reg_ipa_tx_cfg,
- [FLAVOR_0] = &ipa_reg_flavor_0,
- [IDLE_INDICATION_CFG] = &ipa_reg_idle_indication_cfg,
- [SRC_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_01_rsrc_type,
- [SRC_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_23_rsrc_type,
- [DST_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_01_rsrc_type,
- [DST_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_23_rsrc_type,
- [ENDP_INIT_CTRL] = &ipa_reg_endp_init_ctrl,
- [ENDP_INIT_CFG] = &ipa_reg_endp_init_cfg,
- [ENDP_INIT_NAT] = &ipa_reg_endp_init_nat,
- [ENDP_INIT_HDR] = &ipa_reg_endp_init_hdr,
- [ENDP_INIT_HDR_EXT] = &ipa_reg_endp_init_hdr_ext,
- [ENDP_INIT_HDR_METADATA_MASK] = &ipa_reg_endp_init_hdr_metadata_mask,
- [ENDP_INIT_MODE] = &ipa_reg_endp_init_mode,
- [ENDP_INIT_AGGR] = &ipa_reg_endp_init_aggr,
- [ENDP_INIT_HOL_BLOCK_EN] = &ipa_reg_endp_init_hol_block_en,
- [ENDP_INIT_HOL_BLOCK_TIMER] = &ipa_reg_endp_init_hol_block_timer,
- [ENDP_INIT_DEAGGR] = &ipa_reg_endp_init_deaggr,
- [ENDP_INIT_RSRC_GRP] = &ipa_reg_endp_init_rsrc_grp,
- [ENDP_INIT_SEQ] = &ipa_reg_endp_init_seq,
- [ENDP_STATUS] = &ipa_reg_endp_status,
- [ENDP_FILTER_ROUTER_HSH_CFG] = &ipa_reg_endp_filter_router_hsh_cfg,
- [IPA_IRQ_STTS] = &ipa_reg_ipa_irq_stts,
- [IPA_IRQ_EN] = &ipa_reg_ipa_irq_en,
- [IPA_IRQ_CLR] = &ipa_reg_ipa_irq_clr,
- [IPA_IRQ_UC] = &ipa_reg_ipa_irq_uc,
- [IRQ_SUSPEND_INFO] = &ipa_reg_irq_suspend_info,
- [IRQ_SUSPEND_EN] = &ipa_reg_irq_suspend_en,
- [IRQ_SUSPEND_CLR] = &ipa_reg_irq_suspend_clr,
-};
-
-const struct ipa_regs ipa_regs_v3_5_1 = {
- .reg_count = ARRAY_SIZE(ipa_reg_array),
- .reg = ipa_reg_array,
+REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr,
+ 0x00003038 + 0x1000 * GSI_EE_AP, 0x0004);
+
+static const struct reg *reg_array[] = {
+ [COMP_CFG] = &reg_comp_cfg,
+ [CLKON_CFG] = &reg_clkon_cfg,
+ [ROUTE] = &reg_route,
+ [SHARED_MEM_SIZE] = &reg_shared_mem_size,
+ [QSB_MAX_WRITES] = &reg_qsb_max_writes,
+ [QSB_MAX_READS] = &reg_qsb_max_reads,
+ [FILT_ROUT_HASH_EN] = &reg_filt_rout_hash_en,
+ [FILT_ROUT_HASH_FLUSH] = &reg_filt_rout_hash_flush,
+ [STATE_AGGR_ACTIVE] = &reg_state_aggr_active,
+ [IPA_BCR] = &reg_ipa_bcr,
+ [LOCAL_PKT_PROC_CNTXT] = &reg_local_pkt_proc_cntxt,
+ [AGGR_FORCE_CLOSE] = &reg_aggr_force_close,
+ [COUNTER_CFG] = &reg_counter_cfg,
+ [IPA_TX_CFG] = &reg_ipa_tx_cfg,
+ [FLAVOR_0] = &reg_flavor_0,
+ [IDLE_INDICATION_CFG] = &reg_idle_indication_cfg,
+ [SRC_RSRC_GRP_01_RSRC_TYPE] = &reg_src_rsrc_grp_01_rsrc_type,
+ [SRC_RSRC_GRP_23_RSRC_TYPE] = &reg_src_rsrc_grp_23_rsrc_type,
+ [DST_RSRC_GRP_01_RSRC_TYPE] = &reg_dst_rsrc_grp_01_rsrc_type,
+ [DST_RSRC_GRP_23_RSRC_TYPE] = &reg_dst_rsrc_grp_23_rsrc_type,
+ [ENDP_INIT_CTRL] = &reg_endp_init_ctrl,
+ [ENDP_INIT_CFG] = &reg_endp_init_cfg,
+ [ENDP_INIT_NAT] = &reg_endp_init_nat,
+ [ENDP_INIT_HDR] = &reg_endp_init_hdr,
+ [ENDP_INIT_HDR_EXT] = &reg_endp_init_hdr_ext,
+ [ENDP_INIT_HDR_METADATA_MASK] = &reg_endp_init_hdr_metadata_mask,
+ [ENDP_INIT_MODE] = &reg_endp_init_mode,
+ [ENDP_INIT_AGGR] = &reg_endp_init_aggr,
+ [ENDP_INIT_HOL_BLOCK_EN] = &reg_endp_init_hol_block_en,
+ [ENDP_INIT_HOL_BLOCK_TIMER] = &reg_endp_init_hol_block_timer,
+ [ENDP_INIT_DEAGGR] = &reg_endp_init_deaggr,
+ [ENDP_INIT_RSRC_GRP] = &reg_endp_init_rsrc_grp,
+ [ENDP_INIT_SEQ] = &reg_endp_init_seq,
+ [ENDP_STATUS] = &reg_endp_status,
+ [ENDP_FILTER_ROUTER_HSH_CFG] = &reg_endp_filter_router_hsh_cfg,
+ [IPA_IRQ_STTS] = &reg_ipa_irq_stts,
+ [IPA_IRQ_EN] = &reg_ipa_irq_en,
+ [IPA_IRQ_CLR] = &reg_ipa_irq_clr,
+ [IPA_IRQ_UC] = &reg_ipa_irq_uc,
+ [IRQ_SUSPEND_INFO] = &reg_irq_suspend_info,
+ [IRQ_SUSPEND_EN] = &reg_irq_suspend_en,
+ [IRQ_SUSPEND_CLR] = &reg_irq_suspend_clr,
+};
+
+const struct regs ipa_regs_v3_5_1 = {
+ .reg_count = ARRAY_SIZE(reg_array),
+ .reg = reg_array,
 };
diff --git a/drivers/net/ipa/reg/ipa_reg-v4.11.c b/drivers/net/ipa/reg/ipa_reg-v4.11.c
index 9a315130530d..29e71cce4a84 100644
--- a/drivers/net/ipa/reg/ipa_reg-v4.11.c
+++ b/drivers/net/ipa/reg/ipa_reg-v4.11.c
@@ -7,7 +7,7 @@
 #include "../ipa.h"
 #include "../ipa_reg.h"
-static const u32 ipa_reg_comp_cfg_fmask[] = {
+static const u32 reg_comp_cfg_fmask[] = {
 [RAM_ARB_PRI_CLIENT_SAMP_FIX_DIS] = BIT(0),
 [GSI_SNOC_BYPASS_DIS] = BIT(1),
 [GEN_QMB_0_SNOC_BYPASS_DIS] = BIT(2),
@@ -36,9 +36,9 @@ static const u32 ipa_reg_comp_cfg_fmask[] = {
 [GEN_QMB_0_DYNAMIC_ASIZE] = BIT(31),
 };
-IPA_REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c);
+REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c);
-static const u32 ipa_reg_clkon_cfg_fmask[] = {
+static const u32 reg_clkon_cfg_fmask[] = {
 [CLKON_RX] = BIT(0),
 [CLKON_PROC] = BIT(1),
 [TX_WRAPPER] = BIT(2),
@@ -73,9 +73,9 @@ static const u32 ipa_reg_clkon_cfg_fmask[] = {
 [DRBIP] = BIT(31),
 };
-IPA_REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044);
+REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044);
-static const u32 ipa_reg_route_fmask[] = {
+static const u32 reg_route_fmask[] = {
 [ROUTE_DIS] = BIT(0),
 [ROUTE_DEF_PIPE] = GENMASK(5, 1),
 [ROUTE_DEF_HDR_TABLE] = BIT(6),
@@ -86,24 +86,24 @@ static const u32 ipa_reg_route_fmask[] = {
 /* Bits 25-31 reserved */
 };
-IPA_REG_FIELDS(ROUTE, route, 0x00000048);
+REG_FIELDS(ROUTE, route, 0x00000048);
-static const u32 ipa_reg_shared_mem_size_fmask[] = {
+static const u32 reg_shared_mem_size_fmask[] = {
 [MEM_SIZE] = GENMASK(15, 0),
 [MEM_BADDR] = GENMASK(31, 16),
 };
-IPA_REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054);
+REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054);
-static const u32 ipa_reg_qsb_max_writes_fmask[] = {
+static const u32 reg_qsb_max_writes_fmask[] = {
 [GEN_QMB_0_MAX_WRITES] = GENMASK(3, 0),
 [GEN_QMB_1_MAX_WRITES] = GENMASK(7, 4),
 /* Bits 8-31 reserved */
 };
-IPA_REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074);
+REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074);
-static const u32 ipa_reg_qsb_max_reads_fmask[] = {
+static const u32 reg_qsb_max_reads_fmask[] = {
 [GEN_QMB_0_MAX_READS] = GENMASK(3, 0),
 [GEN_QMB_1_MAX_READS] = GENMASK(7, 4),
 /* Bits 8-15 reserved */
@@ -111,9 +111,9 @@ static const u32 ipa_reg_qsb_max_reads_fmask[] = {
 [GEN_QMB_1_MAX_READS_BEATS] = GENMASK(31, 24),
 };
-IPA_REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078);
+REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078);
-static const u32 ipa_reg_filt_rout_hash_en_fmask[] = {
+static const u32 reg_filt_rout_hash_en_fmask[] = {
 [IPV6_ROUTER_HASH] = BIT(0),
 /* Bits 1-3 reserved */
 [IPV6_FILTER_HASH] = BIT(4),
@@ -124,9 +124,9 @@ static const u32 ipa_reg_filt_rout_hash_en_fmask[] = {
 /* Bits 13-31 reserved */
 };
-IPA_REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x0000148);
+REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x0000148);
-static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = {
+static const u32 reg_filt_rout_hash_flush_fmask[] = {
 [IPV6_ROUTER_HASH] = BIT(0),
 /* Bits 1-3 reserved */
 [IPV6_FILTER_HASH] = BIT(4),
@@ -137,23 +137,23 @@ static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = {
 /* Bits 13-31 reserved */
 };
-IPA_REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x000014c);
+REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x000014c);
 /* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4, 0x0004);
+REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4, 0x0004);
-static const u32 ipa_reg_local_pkt_proc_cntxt_fmask[] = {
+static const u32 reg_local_pkt_proc_cntxt_fmask[] = {
 [IPA_BASE_ADDR] = GENMASK(17, 0),
 /* Bits 18-31 reserved */
 };
 /* Offset must be a multiple of 8 */
-IPA_REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8);
+REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8);
 /* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004);
+REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004);
-static const u32 ipa_reg_ipa_tx_cfg_fmask[] = {
+static const u32 reg_ipa_tx_cfg_fmask[] = {
 /* Bits 0-1 reserved */
 [PREFETCH_ALMOST_EMPTY_SIZE_TX0] = GENMASK(5, 2),
 [DMAW_SCND_OUTSD_PRED_THRESHOLD] = GENMASK(9, 6),
@@ -166,9 +166,9 @@ static const u32 ipa_reg_ipa_tx_cfg_fmask[] = {
 /* Bits 19-31 reserved */
 };
-IPA_REG_FIELDS(IPA_TX_CFG, ipa_tx_cfg, 0x000001fc);
+REG_FIELDS(IPA_TX_CFG, ipa_tx_cfg, 0x000001fc);
-static const u32 ipa_reg_flavor_0_fmask[] = {
+static const u32 reg_flavor_0_fmask[] = {
 [MAX_PIPES] = GENMASK(4, 0),
 /* Bits 5-7 reserved */
 [MAX_CONS_PIPES] = GENMASK(12, 8),
@@ -179,17 +179,17 @@ static const u32 ipa_reg_flavor_0_fmask[] = {
 /* Bits 28-31 reserved */
 };
-IPA_REG_FIELDS(FLAVOR_0, flavor_0, 0x00000210);
+REG_FIELDS(FLAVOR_0, flavor_0, 0x00000210);
-static const u32 ipa_reg_idle_indication_cfg_fmask[] = {
+static const u32 reg_idle_indication_cfg_fmask[] = {
 [ENTER_IDLE_DEBOUNCE_THRESH] = GENMASK(15, 0),
 [CONST_NON_IDLE_ENABLE] = BIT(16),
 /* Bits 17-31 reserved */
 };
-IPA_REG_FIELDS(IDLE_INDICATION_CFG, idle_indication_cfg, 0x00000240);
+REG_FIELDS(IDLE_INDICATION_CFG, idle_indication_cfg, 0x00000240);
-static const u32 ipa_reg_qtime_timestamp_cfg_fmask[] = {
+static const u32 reg_qtime_timestamp_cfg_fmask[] = {
 [DPL_TIMESTAMP_LSB] = GENMASK(4, 0),
 /* Bits 5-6 reserved */
 [DPL_TIMESTAMP_SEL] = BIT(7),
@@ -199,26 +199,26 @@ static const u32 ipa_reg_qtime_timestamp_cfg_fmask[] = {
 /* Bits 21-31 reserved */
 };
-IPA_REG_FIELDS(QTIME_TIMESTAMP_CFG, qtime_timestamp_cfg, 0x0000024c);
+REG_FIELDS(QTIME_TIMESTAMP_CFG, qtime_timestamp_cfg, 0x0000024c);
-static const u32 ipa_reg_timers_xo_clk_div_cfg_fmask[] = {
+static const u32 reg_timers_xo_clk_div_cfg_fmask[] = {
 [DIV_VALUE] = GENMASK(8, 0),
 /* Bits 9-30 reserved */
 [DIV_ENABLE] = BIT(31),
 };
-IPA_REG_FIELDS(TIMERS_XO_CLK_DIV_CFG, timers_xo_clk_div_cfg, 0x00000250);
+REG_FIELDS(TIMERS_XO_CLK_DIV_CFG, timers_xo_clk_div_cfg, 0x00000250);
-static const u32 ipa_reg_timers_pulse_gran_cfg_fmask[] = {
+static const u32 reg_timers_pulse_gran_cfg_fmask[] = {
 [PULSE_GRAN_0] = GENMASK(2, 0),
 [PULSE_GRAN_1] = GENMASK(5, 3),
 [PULSE_GRAN_2] = GENMASK(8, 6),
 /* Bits 9-31 reserved */
 };
-IPA_REG_FIELDS(TIMERS_PULSE_GRAN_CFG, timers_pulse_gran_cfg, 0x00000254);
+REG_FIELDS(TIMERS_PULSE_GRAN_CFG, timers_pulse_gran_cfg, 0x00000254);
-static const u32 ipa_reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
+static const u32 reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
 [X_MIN_LIM] = GENMASK(5, 0),
 /* Bits 6-7 reserved */
 [X_MAX_LIM] = GENMASK(13, 8),
@@ -229,10 +229,10 @@ static const u32 ipa_reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
 /* Bits 30-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type,
- 0x00000400, 0x0020);
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type,
+ 0x00000400, 0x0020);
-static const u32 ipa_reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
+static const u32 reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
 [X_MIN_LIM] = GENMASK(5, 0),
 /* Bits 6-7 reserved */
 [X_MAX_LIM] = GENMASK(13, 8),
@@ -243,10 +243,10 @@ static const u32 ipa_reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
 /* Bits 30-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type,
- 0x00000404, 0x0020);
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type,
+ 0x00000404, 0x0020);
-static const u32 ipa_reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
+static const u32 reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
 [X_MIN_LIM] = GENMASK(5, 0),
 /* Bits 6-7 reserved */
 [X_MAX_LIM] = GENMASK(13, 8),
@@ -257,10 +257,10 @@ static const u32 ipa_reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
 /* Bits 30-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type,
- 0x00000500, 0x0020);
+REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type,
+ 0x00000500, 0x0020);
-static const u32 ipa_reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
+static const u32 reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
 [X_MIN_LIM] = GENMASK(5, 0),
 /* Bits 6-7 reserved */
 [X_MAX_LIM] = GENMASK(13, 8),
@@ -271,10 +271,10 @@ static const u32 ipa_reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
 /* Bits 30-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type,
- 0x00000504, 0x0020);
+REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type,
+ 0x00000504, 0x0020);
-static const u32 ipa_reg_endp_init_cfg_fmask[] = {
+static const u32 reg_endp_init_cfg_fmask[] = {
 [FRAG_OFFLOAD_EN] = BIT(0),
 [CS_OFFLOAD_EN] = GENMASK(2, 1),
 [CS_METADATA_HDR_OFFSET] = GENMASK(6, 3),
@@ -283,16 +283,16 @@ static const u32 ipa_reg_endp_init_cfg_fmask[] = {
 /* Bits 9-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070);
-static const u32 ipa_reg_endp_init_nat_fmask[] = {
+static const u32 reg_endp_init_nat_fmask[] = {
 [NAT_EN] = GENMASK(1, 0),
 /* Bits 2-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070);
-static const u32 ipa_reg_endp_init_hdr_fmask[] = {
+static const u32 reg_endp_init_hdr_fmask[] = {
 [HDR_LEN] = GENMASK(5, 0),
 [HDR_OFST_METADATA_VALID] = BIT(6),
 [HDR_OFST_METADATA] = GENMASK(12, 7),
@@ -305,9 +305,9 @@ static const u32 ipa_reg_endp_init_hdr_fmask[] = {
 [HDR_OFST_METADATA_MSB] = GENMASK(31, 30),
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070);
-static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = {
+static const u32 reg_endp_init_hdr_ext_fmask[] = {
 [HDR_ENDIANNESS] = BIT(0),
 [HDR_TOTAL_LEN_OR_PAD_VALID] = BIT(1),
 [HDR_TOTAL_LEN_OR_PAD] = BIT(2),
@@ -321,12 +321,12 @@ static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = {
 /* Bits 22-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070);
-IPA_REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask,
- 0x00000818, 0x0070);
+REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask,
+ 0x00000818, 0x0070);
-static const u32 ipa_reg_endp_init_mode_fmask[] = {
+static const u32 reg_endp_init_mode_fmask[] = {
 [ENDP_MODE] = GENMASK(2, 0),
 [DCPH_ENABLE] = BIT(3),
 [DEST_PIPE_INDEX] = GENMASK(8, 4),
@@ -338,9 +338,9 @@ static const u32 ipa_reg_endp_init_mode_fmask[] = {
 /* Bit 31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070);
-static const u32 ipa_reg_endp_init_aggr_fmask[] = {
+static const u32 reg_endp_init_aggr_fmask[] = {
 [AGGR_EN] = GENMASK(1, 0),
 [AGGR_TYPE] = GENMASK(4, 2),
 [BYTE_LIMIT] = GENMASK(10, 5),
@@ -355,27 +355,27 @@ static const u32 ipa_reg_endp_init_aggr_fmask[] = {
 /* Bits 28-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070);
-static const u32 ipa_reg_endp_init_hol_block_en_fmask[] = {
+static const u32 reg_endp_init_hol_block_en_fmask[] = {
 [HOL_BLOCK_EN] = BIT(0),
 /* Bits 1-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en,
- 0x0000082c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en,
+ 0x0000082c, 0x0070);
-static const u32 ipa_reg_endp_init_hol_block_timer_fmask[] = {
+static const u32 reg_endp_init_hol_block_timer_fmask[] = {
 [TIMER_LIMIT] = GENMASK(4, 0),
 /* Bits 5-7 reserved */
 [TIMER_GRAN_SEL] = BIT(8),
 /* Bits 9-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer,
- 0x00000830, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer,
+ 0x00000830, 0x0070);
-static const u32 ipa_reg_endp_init_deaggr_fmask[] = {
+static const u32 reg_endp_init_deaggr_fmask[] = {
 [DEAGGR_HDR_LEN] = GENMASK(5, 0),
 [SYSPIPE_ERR_DETECTION] = BIT(6),
 [PACKET_OFFSET_VALID] = BIT(7),
@@ -385,24 +385,23 @@ static const u32 ipa_reg_endp_init_deaggr_fmask[] = {
 [MAX_PACKET_LEN] = GENMASK(31, 16),
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070);
-static const u32 ipa_reg_endp_init_rsrc_grp_fmask[] = {
+static const u32 reg_endp_init_rsrc_grp_fmask[] = {
 [ENDP_RSRC_GRP] = GENMASK(1, 0),
 /* Bits 2-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp,
- 0x00000838, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp, 0x00000838, 0x0070);
-static const u32 ipa_reg_endp_init_seq_fmask[] = {
+static const u32 reg_endp_init_seq_fmask[] = {
 [SEQ_TYPE] = GENMASK(7, 0),
 /* Bits 8-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070);
-static const u32 ipa_reg_endp_status_fmask[] = {
+static const u32 reg_endp_status_fmask[] = {
 [STATUS_EN] = BIT(0),
 [STATUS_ENDP] = GENMASK(5, 1),
 /* Bits 6-8 reserved */
@@ -410,9 +409,9 @@ static const u32 ipa_reg_endp_status_fmask[] = {
 /* Bits 10-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070);
+REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070);
-static const u32 ipa_reg_endp_filter_router_hsh_cfg_fmask[] = {
+static const u32 reg_endp_filter_router_hsh_cfg_fmask[] = {
 [FILTER_HASH_MSK_SRC_ID] = BIT(0),
 [FILTER_HASH_MSK_SRC_IP] = BIT(1),
 [FILTER_HASH_MSK_DST_IP] = BIT(2),
@@ -433,83 +432,83 @@ static const u32 ipa_reg_endp_filter_router_hsh_cfg_fmask[] = {
 /* Bits 23-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_FILTER_ROUTER_HSH_CFG, endp_filter_router_hsh_cfg,
- 0x0000085c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_FILTER_ROUTER_HSH_CFG, endp_filter_router_hsh_cfg,
+ 0x0000085c, 0x0070);
 /* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_STTS, ipa_irq_stts, 0x00004008 + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_STTS, ipa_irq_stts, 0x00004008 + 0x1000 * GSI_EE_AP);
 /* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_EN, ipa_irq_en, 0x0000400c + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_EN, ipa_irq_en, 0x0000400c + 0x1000 * GSI_EE_AP);
 /* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00004010 + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00004010 + 0x1000 * GSI_EE_AP);
-static const u32 ipa_reg_ipa_irq_uc_fmask[] = {
+static const u32 reg_ipa_irq_uc_fmask[] = {
 [UC_INTR] = BIT(0),
 /* Bits 1-31 reserved */
 };
-IPA_REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000401c + 0x1000 * GSI_EE_AP);
+REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000401c + 0x1000 * GSI_EE_AP);
 /* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info,
- 0x00004030 + 0x1000 * GSI_EE_AP, 0x0004);
+REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info,
+ 0x00004030 + 0x1000 * GSI_EE_AP, 0x0004);
 /* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en,
- 0x00004034 + 0x1000 * GSI_EE_AP, 0x0004);
+REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en,
+ 0x00004034 + 0x1000 * GSI_EE_AP, 0x0004);
 /* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr,
- 0x00004038 + 0x1000 * GSI_EE_AP, 0x0004);
-
-static const struct ipa_reg *ipa_reg_array[] = {
- [COMP_CFG] = &ipa_reg_comp_cfg,
- [CLKON_CFG] = &ipa_reg_clkon_cfg,
- [ROUTE] = &ipa_reg_route,
- [SHARED_MEM_SIZE] = &ipa_reg_shared_mem_size,
- [QSB_MAX_WRITES] = &ipa_reg_qsb_max_writes,
- [QSB_MAX_READS] = &ipa_reg_qsb_max_reads,
- [FILT_ROUT_HASH_EN] = &ipa_reg_filt_rout_hash_en,
- [FILT_ROUT_HASH_FLUSH] = &ipa_reg_filt_rout_hash_flush,
- [STATE_AGGR_ACTIVE] = &ipa_reg_state_aggr_active,
- [LOCAL_PKT_PROC_CNTXT] = &ipa_reg_local_pkt_proc_cntxt,
- [AGGR_FORCE_CLOSE] = &ipa_reg_aggr_force_close,
- [IPA_TX_CFG] = &ipa_reg_ipa_tx_cfg,
- [FLAVOR_0] = &ipa_reg_flavor_0,
- [IDLE_INDICATION_CFG] = &ipa_reg_idle_indication_cfg,
- [QTIME_TIMESTAMP_CFG] = &ipa_reg_qtime_timestamp_cfg,
- [TIMERS_XO_CLK_DIV_CFG] = &ipa_reg_timers_xo_clk_div_cfg,
- [TIMERS_PULSE_GRAN_CFG] = &ipa_reg_timers_pulse_gran_cfg,
- [SRC_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_01_rsrc_type,
- [SRC_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_23_rsrc_type,
- [DST_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_01_rsrc_type,
- [DST_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_23_rsrc_type,
- [ENDP_INIT_CFG] = &ipa_reg_endp_init_cfg,
- [ENDP_INIT_NAT] = &ipa_reg_endp_init_nat,
- [ENDP_INIT_HDR] = &ipa_reg_endp_init_hdr,
- [ENDP_INIT_HDR_EXT] = &ipa_reg_endp_init_hdr_ext,
- [ENDP_INIT_HDR_METADATA_MASK] = &ipa_reg_endp_init_hdr_metadata_mask,
- [ENDP_INIT_MODE] = &ipa_reg_endp_init_mode,
- [ENDP_INIT_AGGR] = &ipa_reg_endp_init_aggr,
- [ENDP_INIT_HOL_BLOCK_EN] = &ipa_reg_endp_init_hol_block_en,
- [ENDP_INIT_HOL_BLOCK_TIMER] = &ipa_reg_endp_init_hol_block_timer,
- [ENDP_INIT_DEAGGR] = &ipa_reg_endp_init_deaggr,
- [ENDP_INIT_RSRC_GRP] = &ipa_reg_endp_init_rsrc_grp,
- [ENDP_INIT_SEQ] = &ipa_reg_endp_init_seq,
- [ENDP_STATUS] = &ipa_reg_endp_status,
- [ENDP_FILTER_ROUTER_HSH_CFG] = &ipa_reg_endp_filter_router_hsh_cfg,
- [IPA_IRQ_STTS] = &ipa_reg_ipa_irq_stts,
- [IPA_IRQ_EN] = &ipa_reg_ipa_irq_en,
- [IPA_IRQ_CLR] = &ipa_reg_ipa_irq_clr,
- [IPA_IRQ_UC] = &ipa_reg_ipa_irq_uc,
- [IRQ_SUSPEND_INFO] = &ipa_reg_irq_suspend_info,
- [IRQ_SUSPEND_EN] = &ipa_reg_irq_suspend_en,
- [IRQ_SUSPEND_CLR] = &ipa_reg_irq_suspend_clr,
-};
-
-const struct ipa_regs ipa_regs_v4_11 = {
- .reg_count = ARRAY_SIZE(ipa_reg_array),
- .reg = ipa_reg_array,
+REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr,
+ 0x00004038 + 0x1000 * GSI_EE_AP, 0x0004);
+
+static const struct reg *reg_array[] = {
+ [COMP_CFG] = &reg_comp_cfg,
+ [CLKON_CFG] = &reg_clkon_cfg,
+ [ROUTE] = &reg_route,
+ [SHARED_MEM_SIZE] = &reg_shared_mem_size,
+ [QSB_MAX_WRITES] = &reg_qsb_max_writes,
+ [QSB_MAX_READS] = &reg_qsb_max_reads,
+ [FILT_ROUT_HASH_EN] = &reg_filt_rout_hash_en,
+ [FILT_ROUT_HASH_FLUSH] = &reg_filt_rout_hash_flush,
+ [STATE_AGGR_ACTIVE] = &reg_state_aggr_active,
+ [LOCAL_PKT_PROC_CNTXT] = &reg_local_pkt_proc_cntxt,
+ [AGGR_FORCE_CLOSE] = &reg_aggr_force_close,
+ [IPA_TX_CFG] = &reg_ipa_tx_cfg,
+ [FLAVOR_0] = &reg_flavor_0,
+ [IDLE_INDICATION_CFG] = &reg_idle_indication_cfg,
+ [QTIME_TIMESTAMP_CFG] = &reg_qtime_timestamp_cfg,
+ [TIMERS_XO_CLK_DIV_CFG] = &reg_timers_xo_clk_div_cfg,
+ [TIMERS_PULSE_GRAN_CFG] = &reg_timers_pulse_gran_cfg,
+ [SRC_RSRC_GRP_01_RSRC_TYPE] = &reg_src_rsrc_grp_01_rsrc_type,
+ [SRC_RSRC_GRP_23_RSRC_TYPE] = &reg_src_rsrc_grp_23_rsrc_type,
+ [DST_RSRC_GRP_01_RSRC_TYPE] = &reg_dst_rsrc_grp_01_rsrc_type,
+ [DST_RSRC_GRP_23_RSRC_TYPE] = &reg_dst_rsrc_grp_23_rsrc_type,
+ [ENDP_INIT_CFG] = &reg_endp_init_cfg,
+ [ENDP_INIT_NAT] = &reg_endp_init_nat,
+ [ENDP_INIT_HDR] = &reg_endp_init_hdr,
+ [ENDP_INIT_HDR_EXT] = &reg_endp_init_hdr_ext,
+ [ENDP_INIT_HDR_METADATA_MASK] = &reg_endp_init_hdr_metadata_mask,
+ [ENDP_INIT_MODE] = &reg_endp_init_mode,
+ [ENDP_INIT_AGGR] = &reg_endp_init_aggr,
+ [ENDP_INIT_HOL_BLOCK_EN] = &reg_endp_init_hol_block_en,
+ [ENDP_INIT_HOL_BLOCK_TIMER] = &reg_endp_init_hol_block_timer,
+ [ENDP_INIT_DEAGGR] = &reg_endp_init_deaggr,
+ [ENDP_INIT_RSRC_GRP] = &reg_endp_init_rsrc_grp,
+ [ENDP_INIT_SEQ] = &reg_endp_init_seq,
+ [ENDP_STATUS] = &reg_endp_status,
+ [ENDP_FILTER_ROUTER_HSH_CFG] = &reg_endp_filter_router_hsh_cfg,
+ [IPA_IRQ_STTS] = &reg_ipa_irq_stts,
+ [IPA_IRQ_EN] = &reg_ipa_irq_en,
+ [IPA_IRQ_CLR] = &reg_ipa_irq_clr,
+ [IPA_IRQ_UC] = &reg_ipa_irq_uc,
+ [IRQ_SUSPEND_INFO] = &reg_irq_suspend_info,
+ [IRQ_SUSPEND_EN] = &reg_irq_suspend_en,
+ [IRQ_SUSPEND_CLR] = &reg_irq_suspend_clr,
+};
+
+const struct regs ipa_regs_v4_11 = {
+ .reg_count = ARRAY_SIZE(reg_array),
+ .reg = reg_array,
 };
diff --git a/drivers/net/ipa/reg/ipa_reg-v4.2.c b/drivers/net/ipa/reg/ipa_reg-v4.2.c
index 7a95149f8ec7..bb7cf488144d 100644
--- a/drivers/net/ipa/reg/ipa_reg-v4.2.c
+++ b/drivers/net/ipa/reg/ipa_reg-v4.2.c
@@ -7,7 +7,7 @@
 #include "../ipa.h"
 #include "../ipa_reg.h"
-static const u32 ipa_reg_comp_cfg_fmask[] = {
+static const u32 reg_comp_cfg_fmask[] = {
 /* Bit 0 reserved */
 [GSI_SNOC_BYPASS_DIS] = BIT(1),
 [GEN_QMB_0_SNOC_BYPASS_DIS] = BIT(2),
@@ -29,9 +29,9 @@ static const u32 ipa_reg_comp_cfg_fmask[] = {
 /* Bits 21-31 reserved */
 };
-IPA_REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c);
+REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c);
-static const u32 ipa_reg_clkon_cfg_fmask[] = {
+static const u32 reg_clkon_cfg_fmask[] = {
 [CLKON_RX] = BIT(0),
 [CLKON_PROC] = BIT(1),
 [TX_WRAPPER] = BIT(2),
@@ -65,9 +65,9 @@ static const u32 ipa_reg_clkon_cfg_fmask[] = {
 /* Bits 30-31 reserved */
 };
-IPA_REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044);
+REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044);
-static const u32 ipa_reg_route_fmask[] = {
+static const u32 reg_route_fmask[] = {
 [ROUTE_DIS] = BIT(0),
 [ROUTE_DEF_PIPE] = GENMASK(5, 1),
 [ROUTE_DEF_HDR_TABLE] = BIT(6),
@@ -78,24 +78,24 @@ static const u32 ipa_reg_route_fmask[] = {
 /* Bits 25-31 reserved */
 };
-IPA_REG_FIELDS(ROUTE, route, 0x00000048);
+REG_FIELDS(ROUTE, route, 0x00000048);
-static const u32 ipa_reg_shared_mem_size_fmask[] = {
+static const u32 reg_shared_mem_size_fmask[] = {
 [MEM_SIZE] = GENMASK(15, 0),
 [MEM_BADDR] = GENMASK(31, 16),
 };
-IPA_REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054);
+REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054);
-static const u32 ipa_reg_qsb_max_writes_fmask[] = {
+static const u32 reg_qsb_max_writes_fmask[] = {
 [GEN_QMB_0_MAX_WRITES] = GENMASK(3, 0),
 [GEN_QMB_1_MAX_WRITES] = GENMASK(7, 4),
 /* Bits 8-31 reserved */
 };
-IPA_REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074);
+REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074);
-static const u32 ipa_reg_qsb_max_reads_fmask[] = {
+static const u32 reg_qsb_max_reads_fmask[] = {
 [GEN_QMB_0_MAX_READS] = GENMASK(3, 0),
 [GEN_QMB_1_MAX_READS] = GENMASK(7, 4),
 /* Bits 8-15 reserved */
@@ -103,9 +103,9 @@ static const u32 ipa_reg_qsb_max_reads_fmask[] = {
 [GEN_QMB_1_MAX_READS_BEATS] = GENMASK(31, 24),
 };
-IPA_REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078);
+REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078);
-static const u32 ipa_reg_filt_rout_hash_en_fmask[] = {
+static const u32 reg_filt_rout_hash_en_fmask[] = {
 [IPV6_ROUTER_HASH] = BIT(0),
 /* Bits 1-3 reserved */
 [IPV6_FILTER_HASH] = BIT(4),
@@ -116,9 +116,9 @@ static const u32 ipa_reg_filt_rout_hash_en_fmask[] = {
 /* Bits 13-31 reserved */
 };
-IPA_REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x0000148);
+REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x0000148);
-static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = {
+static const u32 reg_filt_rout_hash_flush_fmask[] = {
 [IPV6_ROUTER_HASH] = BIT(0),
 /* Bits 1-3 reserved */
 [IPV6_FILTER_HASH] = BIT(4),
@@ -129,33 +129,33 @@ static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = {
 /* Bits 13-31 reserved */
 };
-IPA_REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x000014c);
+REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x000014c);
 /* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4, 0x0004);
+REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4, 0x0004);
-IPA_REG(IPA_BCR, ipa_bcr, 0x000001d0);
+REG(IPA_BCR, ipa_bcr, 0x000001d0);
-static const u32 ipa_reg_local_pkt_proc_cntxt_fmask[] = {
+static const u32 reg_local_pkt_proc_cntxt_fmask[] = {
 [IPA_BASE_ADDR] = GENMASK(16, 0),
 /* Bits 17-31 reserved */
 };
 /* Offset must be a multiple of 8 */
-IPA_REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8);
+REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8);
 /* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004);
+REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004);
-static const u32 ipa_reg_counter_cfg_fmask[] = {
+static const u32 reg_counter_cfg_fmask[] = {
 /* Bits 0-3 reserved */
 [AGGR_GRANULARITY] = GENMASK(8, 4),
 /* Bits 9-31 reserved */
 };
-IPA_REG_FIELDS(COUNTER_CFG, counter_cfg, 0x000001f0);
+REG_FIELDS(COUNTER_CFG, counter_cfg, 0x000001f0);
-static const u32 ipa_reg_ipa_tx_cfg_fmask[] = {
+static const u32 reg_ipa_tx_cfg_fmask[] = {
 /* Bits 0-1 reserved */
 [PREFETCH_ALMOST_EMPTY_SIZE_TX0] = GENMASK(5, 2),
 [DMAW_SCND_OUTSD_PRED_THRESHOLD] = GENMASK(9, 6),
@@ -169,9 +169,9 @@ static const u32 ipa_reg_ipa_tx_cfg_fmask[] = {
 /* Bits 20-31 reserved */
 };
-IPA_REG_FIELDS(IPA_TX_CFG, ipa_tx_cfg, 0x000001fc);
+REG_FIELDS(IPA_TX_CFG, ipa_tx_cfg, 0x000001fc);
-static const u32 ipa_reg_flavor_0_fmask[] = {
+static const u32 reg_flavor_0_fmask[] = {
 [MAX_PIPES] = GENMASK(3, 0),
 /* Bits 4-7 reserved */
 [MAX_CONS_PIPES] = GENMASK(12, 8),
@@ -182,17 +182,17 @@ static const u32 ipa_reg_flavor_0_fmask[] = {
 /* Bits 28-31 reserved */
 };
-IPA_REG_FIELDS(FLAVOR_0, flavor_0, 0x00000210);
+REG_FIELDS(FLAVOR_0, flavor_0, 0x00000210);
-static const u32 ipa_reg_idle_indication_cfg_fmask[] = {
+static const u32 reg_idle_indication_cfg_fmask[] = {
 [ENTER_IDLE_DEBOUNCE_THRESH] = GENMASK(15, 0),
 [CONST_NON_IDLE_ENABLE] = BIT(16),
 /* Bits 17-31 reserved */
 };
-IPA_REG_FIELDS(IDLE_INDICATION_CFG, idle_indication_cfg, 0x00000240);
+REG_FIELDS(IDLE_INDICATION_CFG, idle_indication_cfg, 0x00000240);
-static const u32 ipa_reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
+static const u32 reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
 [X_MIN_LIM] = GENMASK(5, 0),
 /* Bits 6-7 reserved */
 [X_MAX_LIM] = GENMASK(13, 8),
@@ -203,10 +203,10 @@ static const u32 ipa_reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
 /* Bits 30-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type,
- 0x00000400, 0x0020);
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type,
+ 0x00000400, 0x0020);
-static const u32 ipa_reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
+static const u32 reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
 [X_MIN_LIM] = GENMASK(5, 0),
 /* Bits 6-7 reserved */
 [X_MAX_LIM] = GENMASK(13, 8),
@@ -217,10 +217,10 @@ static const u32 ipa_reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
 /* Bits 30-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type,
- 0x00000404, 0x0020);
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type,
+ 0x00000404, 0x0020);
-static const u32 ipa_reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
+static const u32 reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
 [X_MIN_LIM] = GENMASK(5, 0),
 /* Bits 6-7 reserved */
 [X_MAX_LIM] = GENMASK(13, 8),
@@ -231,10 +231,10 @@ static const u32 ipa_reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
 /* Bits 30-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type,
- 0x00000500, 0x0020);
+REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type,
+ 0x00000500, 0x0020);
-static const u32 ipa_reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
+static const u32 reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
 [X_MIN_LIM] = GENMASK(5, 0),
 /* Bits 6-7 reserved */
 [X_MAX_LIM] = GENMASK(13, 8),
@@ -245,10 +245,10 @@ static const u32 ipa_reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
 /* Bits 30-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type,
- 0x00000504, 0x0020);
+REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type,
+ 0x00000504, 0x0020);
-static const u32 ipa_reg_endp_init_cfg_fmask[] = {
+static const u32 reg_endp_init_cfg_fmask[] = {
 [FRAG_OFFLOAD_EN] = BIT(0),
 [CS_OFFLOAD_EN] = GENMASK(2, 1),
 [CS_METADATA_HDR_OFFSET] = GENMASK(6, 3),
@@ -257,16 +257,16 @@ static const u32 ipa_reg_endp_init_cfg_fmask[] = {
 /* Bits 9-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070);
-static const u32 ipa_reg_endp_init_nat_fmask[] = {
+static const u32 reg_endp_init_nat_fmask[] = {
 [NAT_EN] = GENMASK(1, 0),
 /* Bits 2-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070);
-static const u32 ipa_reg_endp_init_hdr_fmask[] = {
+static const u32 reg_endp_init_hdr_fmask[] = {
 [HDR_LEN] = GENMASK(5, 0),
 [HDR_OFST_METADATA_VALID] = BIT(6),
 [HDR_OFST_METADATA] = GENMASK(12, 7),
@@ -279,9 +279,9 @@ static const u32 ipa_reg_endp_init_hdr_fmask[] = {
 /* Bits 29-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070);
-static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = {
+static const u32 reg_endp_init_hdr_ext_fmask[] = {
 [HDR_ENDIANNESS] = BIT(0),
 [HDR_TOTAL_LEN_OR_PAD_VALID] = BIT(1),
 [HDR_TOTAL_LEN_OR_PAD] = BIT(2),
@@ -291,12 +291,12 @@ static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = {
 /* Bits 14-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070);
-IPA_REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask,
- 0x00000818, 0x0070);
+REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask,
+ 0x00000818, 0x0070);
-static const u32 ipa_reg_endp_init_mode_fmask[] = {
+static const u32 reg_endp_init_mode_fmask[] = {
 [ENDP_MODE] = GENMASK(2, 0),
 /* Bit 3 reserved */
 [DEST_PIPE_INDEX] = GENMASK(8, 4),
@@ -308,9 +308,9 @@ static const u32 ipa_reg_endp_init_mode_fmask[] = {
 /* Bit 31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070);
-static const u32 ipa_reg_endp_init_aggr_fmask[] = {
+static const u32 reg_endp_init_aggr_fmask[] = {
 [AGGR_EN] = GENMASK(1, 0),
 [AGGR_TYPE] = GENMASK(4, 2),
 [BYTE_LIMIT] = GENMASK(9, 5),
@@ -323,27 +323,27 @@ static const u32 ipa_reg_endp_init_aggr_fmask[] = {
 /* Bits 25-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070);
-static const u32 ipa_reg_endp_init_hol_block_en_fmask[] = {
+static const u32 reg_endp_init_hol_block_en_fmask[] = {
 [HOL_BLOCK_EN] = BIT(0),
 /* Bits 1-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en,
- 0x0000082c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en,
+ 0x0000082c, 0x0070);
-static const u32 ipa_reg_endp_init_hol_block_timer_fmask[] = {
+static const u32 reg_endp_init_hol_block_timer_fmask[] = {
 [TIMER_BASE_VALUE] = GENMASK(4, 0),
 /* Bits 5-7 reserved */
 [TIMER_SCALE] = GENMASK(12, 8),
 /* Bits 13-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer,
- 0x00000830, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer,
+ 0x00000830, 0x0070);
-static const u32 ipa_reg_endp_init_deaggr_fmask[] = {
+static const u32 reg_endp_init_deaggr_fmask[] = {
 [DEAGGR_HDR_LEN] = GENMASK(5, 0),
 [SYSPIPE_ERR_DETECTION] = BIT(6),
 [PACKET_OFFSET_VALID] = BIT(7),
@@ -353,25 +353,24 @@ static const u32 ipa_reg_endp_init_deaggr_fmask[] = {
 [MAX_PACKET_LEN] = GENMASK(31, 16),
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070);
-static const u32 ipa_reg_endp_init_rsrc_grp_fmask[] = {
+static const u32 reg_endp_init_rsrc_grp_fmask[] = {
 [ENDP_RSRC_GRP] = BIT(0),
 /* Bits 1-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp,
- 0x00000838, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp, 0x00000838, 0x0070);
-static const u32 ipa_reg_endp_init_seq_fmask[] = {
+static const u32 reg_endp_init_seq_fmask[] = {
 [SEQ_TYPE] = GENMASK(7, 0),
 [SEQ_REP_TYPE] = GENMASK(15, 8),
 /* Bits 16-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070);
-static const u32 ipa_reg_endp_status_fmask[] = {
+static const u32 reg_endp_status_fmask[] = {
 [STATUS_EN] = BIT(0),
 [STATUS_ENDP] = GENMASK(5, 1),
 /* Bits 6-7 reserved */
@@ -380,80 +379,80 @@ static const u32 ipa_reg_endp_status_fmask[] = {
 /* Bits 10-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070);
+REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070);
 /* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_STTS, ipa_irq_stts, 0x00003008 + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_STTS, ipa_irq_stts, 0x00003008 + 0x1000 * GSI_EE_AP);
 /* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_EN, ipa_irq_en, 0x0000300c + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_EN, ipa_irq_en, 0x0000300c + 0x1000 * GSI_EE_AP);
 /* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00003010 + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00003010 + 0x1000 * GSI_EE_AP);
-static const u32 ipa_reg_ipa_irq_uc_fmask[] = {
+static const u32 reg_ipa_irq_uc_fmask[] = {
 [UC_INTR] = BIT(0),
 /* Bits 1-31 reserved */
 };
-IPA_REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000301c + 0x1000 * GSI_EE_AP);
+REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000301c + 0x1000 * GSI_EE_AP);
 /* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info,
- 0x00003030 + 0x1000 * GSI_EE_AP, 0x0004);
+REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info,
+ 0x00003030 + 0x1000 * GSI_EE_AP, 0x0004);
 /* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en,
- 0x00003034 + 0x1000 * GSI_EE_AP, 0x0004);
+REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en,
+ 0x00003034 + 0x1000 * GSI_EE_AP, 0x0004);
 /* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr,
- 0x00003038 + 0x1000 * GSI_EE_AP, 0x0004);
-
-static const struct ipa_reg *ipa_reg_array[] = {
- [COMP_CFG] = &ipa_reg_comp_cfg,
- [CLKON_CFG] = &ipa_reg_clkon_cfg,
- [ROUTE] = &ipa_reg_route,
- [SHARED_MEM_SIZE] = &ipa_reg_shared_mem_size,
- [QSB_MAX_WRITES] = &ipa_reg_qsb_max_writes,
- [QSB_MAX_READS] = &ipa_reg_qsb_max_reads,
- [FILT_ROUT_HASH_EN] = &ipa_reg_filt_rout_hash_en,
- [FILT_ROUT_HASH_FLUSH] = &ipa_reg_filt_rout_hash_flush,
- [STATE_AGGR_ACTIVE] = &ipa_reg_state_aggr_active,
- [IPA_BCR] = &ipa_reg_ipa_bcr,
- [LOCAL_PKT_PROC_CNTXT] = &ipa_reg_local_pkt_proc_cntxt,
- [AGGR_FORCE_CLOSE] = &ipa_reg_aggr_force_close,
- [COUNTER_CFG] = &ipa_reg_counter_cfg,
- [IPA_TX_CFG] = &ipa_reg_ipa_tx_cfg,
- [FLAVOR_0] = &ipa_reg_flavor_0,
- [IDLE_INDICATION_CFG] = &ipa_reg_idle_indication_cfg,
- [SRC_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_01_rsrc_type,
- [SRC_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_23_rsrc_type,
- [DST_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_01_rsrc_type,
- [DST_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_23_rsrc_type,
- [ENDP_INIT_CFG] = &ipa_reg_endp_init_cfg,
- [ENDP_INIT_NAT] = &ipa_reg_endp_init_nat,
- [ENDP_INIT_HDR] = &ipa_reg_endp_init_hdr,
- [ENDP_INIT_HDR_EXT] = &ipa_reg_endp_init_hdr_ext,
- [ENDP_INIT_HDR_METADATA_MASK] = &ipa_reg_endp_init_hdr_metadata_mask,
- [ENDP_INIT_MODE] = &ipa_reg_endp_init_mode,
- [ENDP_INIT_AGGR] = &ipa_reg_endp_init_aggr,
- [ENDP_INIT_HOL_BLOCK_EN] = &ipa_reg_endp_init_hol_block_en,
- [ENDP_INIT_HOL_BLOCK_TIMER] = &ipa_reg_endp_init_hol_block_timer,
- [ENDP_INIT_DEAGGR] = &ipa_reg_endp_init_deaggr,
- [ENDP_INIT_RSRC_GRP] = &ipa_reg_endp_init_rsrc_grp,
- [ENDP_INIT_SEQ] = &ipa_reg_endp_init_seq,
- [ENDP_STATUS] = &ipa_reg_endp_status,
- [IPA_IRQ_STTS] = &ipa_reg_ipa_irq_stts,
- [IPA_IRQ_EN] = &ipa_reg_ipa_irq_en,
- [IPA_IRQ_CLR] = &ipa_reg_ipa_irq_clr,
- [IPA_IRQ_UC] = &ipa_reg_ipa_irq_uc,
- [IRQ_SUSPEND_INFO] = &ipa_reg_irq_suspend_info,
- [IRQ_SUSPEND_EN] = &ipa_reg_irq_suspend_en,
- [IRQ_SUSPEND_CLR] = &ipa_reg_irq_suspend_clr,
-};
-
-const struct ipa_regs ipa_regs_v4_2 = {
- .reg_count = ARRAY_SIZE(ipa_reg_array),
- .reg = ipa_reg_array,
+REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr,
+ 0x00003038 + 0x1000 * GSI_EE_AP, 0x0004);
+
+static const struct reg *reg_array[] = {
+ [COMP_CFG] = &reg_comp_cfg,
+ [CLKON_CFG] = &reg_clkon_cfg,
+ [ROUTE] = &reg_route,
+ [SHARED_MEM_SIZE] = &reg_shared_mem_size,
+ [QSB_MAX_WRITES] = &reg_qsb_max_writes,
+ [QSB_MAX_READS] = &reg_qsb_max_reads,
+ [FILT_ROUT_HASH_EN] = &reg_filt_rout_hash_en,
+ [FILT_ROUT_HASH_FLUSH] = &reg_filt_rout_hash_flush,
+ [STATE_AGGR_ACTIVE] = &reg_state_aggr_active,
+ [IPA_BCR] = &reg_ipa_bcr,
+ [LOCAL_PKT_PROC_CNTXT] = &reg_local_pkt_proc_cntxt,
+ [AGGR_FORCE_CLOSE] = &reg_aggr_force_close,
+ [COUNTER_CFG] = &reg_counter_cfg,
+ [IPA_TX_CFG] = &reg_ipa_tx_cfg,
+ [FLAVOR_0] = &reg_flavor_0,
+ [IDLE_INDICATION_CFG] = &reg_idle_indication_cfg,
+ [SRC_RSRC_GRP_01_RSRC_TYPE] = &reg_src_rsrc_grp_01_rsrc_type,
+ [SRC_RSRC_GRP_23_RSRC_TYPE] = &reg_src_rsrc_grp_23_rsrc_type,
+ [DST_RSRC_GRP_01_RSRC_TYPE] = &reg_dst_rsrc_grp_01_rsrc_type,
+ [DST_RSRC_GRP_23_RSRC_TYPE] = &reg_dst_rsrc_grp_23_rsrc_type,
+ [ENDP_INIT_CFG] = &reg_endp_init_cfg,
+ [ENDP_INIT_NAT] = &reg_endp_init_nat,
+ [ENDP_INIT_HDR] = &reg_endp_init_hdr,
+ [ENDP_INIT_HDR_EXT] = &reg_endp_init_hdr_ext,
+ [ENDP_INIT_HDR_METADATA_MASK] = &reg_endp_init_hdr_metadata_mask,
+ [ENDP_INIT_MODE] = &reg_endp_init_mode,
+ [ENDP_INIT_AGGR] = &reg_endp_init_aggr,
+ [ENDP_INIT_HOL_BLOCK_EN] = &reg_endp_init_hol_block_en,
+ [ENDP_INIT_HOL_BLOCK_TIMER] = &reg_endp_init_hol_block_timer,
+ [ENDP_INIT_DEAGGR] = &reg_endp_init_deaggr,
+ [ENDP_INIT_RSRC_GRP] = &reg_endp_init_rsrc_grp,
+ [ENDP_INIT_SEQ] = &reg_endp_init_seq,
+ [ENDP_STATUS] = &reg_endp_status,
+ [IPA_IRQ_STTS] = &reg_ipa_irq_stts,
+ [IPA_IRQ_EN] = &reg_ipa_irq_en,
+ [IPA_IRQ_CLR] = &reg_ipa_irq_clr,
+ [IPA_IRQ_UC] = &reg_ipa_irq_uc,
+ [IRQ_SUSPEND_INFO] = &reg_irq_suspend_info,
+ [IRQ_SUSPEND_EN] = &reg_irq_suspend_en,
+ [IRQ_SUSPEND_CLR] = &reg_irq_suspend_clr,
+};
+
+const struct regs ipa_regs_v4_2 = {
+ .reg_count = ARRAY_SIZE(reg_array),
+ .reg = reg_array,
 };
diff --git a/drivers/net/ipa/reg/ipa_reg-v4.5.c b/drivers/net/ipa/reg/ipa_reg-v4.5.c
index 587eb8d4e00f..1c58f78851c2 100644
--- a/drivers/net/ipa/reg/ipa_reg-v4.5.c
+++ b/drivers/net/ipa/reg/ipa_reg-v4.5.c
@@ -7,7 +7,7 @@
 #include "../ipa.h"
 #include "../ipa_reg.h"
-static const u32 ipa_reg_comp_cfg_fmask[] = {
+static const u32 reg_comp_cfg_fmask[] = {
 /* Bit 0 reserved */
 [GSI_SNOC_BYPASS_DIS] = BIT(1),
 [GEN_QMB_0_SNOC_BYPASS_DIS] = BIT(2),
@@ -30,9 +30,9 @@ static const u32 ipa_reg_comp_cfg_fmask[] = {
 /* Bits 22-31 reserved */
 };
-IPA_REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c);
+REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c);
-static const u32 ipa_reg_clkon_cfg_fmask[] = {
+static const u32 reg_clkon_cfg_fmask[] = {
 [CLKON_RX] = BIT(0),
 [CLKON_PROC] = BIT(1),
 [TX_WRAPPER] = BIT(2),
@@ -67,9 +67,9 @@ static const u32 ipa_reg_clkon_cfg_fmask[] = {
 /* Bit 31 reserved */
 };
-IPA_REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044);
+REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044);
-static const u32 ipa_reg_route_fmask[] = {
+static const u32 reg_route_fmask[] = {
 [ROUTE_DIS] = BIT(0),
 [ROUTE_DEF_PIPE] = GENMASK(5, 1),
 [ROUTE_DEF_HDR_TABLE] = BIT(6),
@@ -80,24 +80,24 @@ static const u32 ipa_reg_route_fmask[] = {
 /* Bits 25-31 reserved */
 };
-IPA_REG_FIELDS(ROUTE, route, 0x00000048);
+REG_FIELDS(ROUTE, route, 0x00000048);
-static const u32 ipa_reg_shared_mem_size_fmask[] = {
+static const u32 reg_shared_mem_size_fmask[] = {
 [MEM_SIZE] = GENMASK(15, 0),
 [MEM_BADDR] = GENMASK(31, 16),
 };
-IPA_REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054);
+REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054);
-static const u32 ipa_reg_qsb_max_writes_fmask[] = {
+static const u32 reg_qsb_max_writes_fmask[] = {
 [GEN_QMB_0_MAX_WRITES] = GENMASK(3, 0),
 [GEN_QMB_1_MAX_WRITES] = GENMASK(7, 4),
 /* Bits 8-31 reserved */
 };
-IPA_REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074);
+REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074);
-static const u32 ipa_reg_qsb_max_reads_fmask[] = {
+static const u32 reg_qsb_max_reads_fmask[] = {
 [GEN_QMB_0_MAX_READS] = GENMASK(3, 0),
 [GEN_QMB_1_MAX_READS] = GENMASK(7, 4),
 /* Bits 8-15 reserved */
@@ -105,9 +105,9 @@ static const u32 ipa_reg_qsb_max_reads_fmask[] = {
 [GEN_QMB_1_MAX_READS_BEATS] = GENMASK(31, 24),
 };
-IPA_REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078);
+REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078);
-static const u32 ipa_reg_filt_rout_hash_en_fmask[] = {
+static const u32 reg_filt_rout_hash_en_fmask[] = {
 [IPV6_ROUTER_HASH] = BIT(0),
 /* Bits 1-3 reserved */
 [IPV6_FILTER_HASH] = BIT(4),
@@ -118,9 +118,9 @@ static const u32 ipa_reg_filt_rout_hash_en_fmask[] = {
 /* Bits 13-31 reserved */
 };
-IPA_REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x0000148);
+REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x0000148);
-static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = {
+static const u32 reg_filt_rout_hash_flush_fmask[] = {
 [IPV6_ROUTER_HASH] = BIT(0),
 /* Bits 1-3 reserved */
 [IPV6_FILTER_HASH] = BIT(4),
@@ -131,23 +131,23 @@ static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = {
 /* Bits 13-31 reserved */
 };
-IPA_REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x000014c);
+REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x000014c);
 /* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4, 0x0004);
+REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4, 0x0004);
-static const u32 ipa_reg_local_pkt_proc_cntxt_fmask[] = {
+static const u32 reg_local_pkt_proc_cntxt_fmask[] = {
 [IPA_BASE_ADDR] = GENMASK(17, 0),
 /* Bits 18-31 reserved */
 };
 /* Offset must be a multiple of 8 */
-IPA_REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8);
+REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8);
 /* Valid bits defined by ipa->available */
-IPA_REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004);
+REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004);
-static const u32 ipa_reg_ipa_tx_cfg_fmask[] = {
+static const u32 reg_ipa_tx_cfg_fmask[] = {
 /* Bits 0-1 reserved */
 [PREFETCH_ALMOST_EMPTY_SIZE_TX0] = GENMASK(5, 2),
 [DMAW_SCND_OUTSD_PRED_THRESHOLD] = GENMASK(9, 6),
@@ -159,9 +159,9 @@ static const u32 ipa_reg_ipa_tx_cfg_fmask[] = {
 /* Bits 18-31 reserved */
 };
-IPA_REG_FIELDS(IPA_TX_CFG, ipa_tx_cfg, 0x000001fc);
+REG_FIELDS(IPA_TX_CFG, ipa_tx_cfg, 0x000001fc);
-static const u32 ipa_reg_flavor_0_fmask[] = {
+static const u32 reg_flavor_0_fmask[] = {
 [MAX_PIPES] = GENMASK(3, 0),
 /* Bits 4-7 reserved */
 [MAX_CONS_PIPES] = GENMASK(12, 8),
@@ -172,17 +172,17 @@ static const u32 ipa_reg_flavor_0_fmask[] = {
 /* Bits 28-31 reserved */
 };
-IPA_REG_FIELDS(FLAVOR_0, flavor_0, 0x00000210);
+REG_FIELDS(FLAVOR_0, flavor_0, 0x00000210);
-static const u32 ipa_reg_idle_indication_cfg_fmask[] = {
+static const u32 reg_idle_indication_cfg_fmask[] = {
 [ENTER_IDLE_DEBOUNCE_THRESH] = GENMASK(15, 0),
 [CONST_NON_IDLE_ENABLE] = BIT(16),
 /* Bits 17-31 reserved */
 };
-IPA_REG_FIELDS(IDLE_INDICATION_CFG, idle_indication_cfg, 0x00000240);
+REG_FIELDS(IDLE_INDICATION_CFG, idle_indication_cfg, 0x00000240);
-static const u32 ipa_reg_qtime_timestamp_cfg_fmask[] = {
+static const u32 reg_qtime_timestamp_cfg_fmask[] = {
 [DPL_TIMESTAMP_LSB] = GENMASK(4, 0),
 /* Bits 5-6 reserved */
 [DPL_TIMESTAMP_SEL] = BIT(7),
@@ -192,25 +192,25 @@ static const u32 ipa_reg_qtime_timestamp_cfg_fmask[] = {
 /* Bits 21-31 reserved */
 };
-IPA_REG_FIELDS(QTIME_TIMESTAMP_CFG, qtime_timestamp_cfg, 0x0000024c);
+REG_FIELDS(QTIME_TIMESTAMP_CFG, qtime_timestamp_cfg, 0x0000024c);
-static const u32 ipa_reg_timers_xo_clk_div_cfg_fmask[] = {
+static const u32 reg_timers_xo_clk_div_cfg_fmask[] = {
 [DIV_VALUE] = GENMASK(8, 0),
 /* Bits 9-30 reserved */
 [DIV_ENABLE] = BIT(31),
 };
-IPA_REG_FIELDS(TIMERS_XO_CLK_DIV_CFG, timers_xo_clk_div_cfg, 0x00000250);
+REG_FIELDS(TIMERS_XO_CLK_DIV_CFG, timers_xo_clk_div_cfg, 0x00000250);
-static const u32 ipa_reg_timers_pulse_gran_cfg_fmask[] = {
+static const u32 reg_timers_pulse_gran_cfg_fmask[] = {
 [PULSE_GRAN_0] = GENMASK(2, 0),
 [PULSE_GRAN_1] = GENMASK(5, 3),
 [PULSE_GRAN_2] = GENMASK(8, 6),
 };
-IPA_REG_FIELDS(TIMERS_PULSE_GRAN_CFG, timers_pulse_gran_cfg, 0x00000254);
+REG_FIELDS(TIMERS_PULSE_GRAN_CFG, timers_pulse_gran_cfg, 0x00000254);
-static const u32 ipa_reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
+static const u32 reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
 [X_MIN_LIM] = GENMASK(5, 0),
 /* Bits 6-7 reserved */
 [X_MAX_LIM] = GENMASK(13, 8),
@@ -221,10 +221,10 @@ static const u32 ipa_reg_src_rsrc_grp_01_rsrc_type_fmask[] = {
 /* Bits 30-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type,
- 0x00000400, 0x0020);
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type,
+ 0x00000400, 0x0020);
-static const u32 ipa_reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
+static const u32 reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
 [X_MIN_LIM] = GENMASK(5, 0),
 /* Bits 6-7 reserved */
 [X_MAX_LIM] = GENMASK(13, 8),
@@ -235,10 +235,10 @@ static const u32 ipa_reg_src_rsrc_grp_23_rsrc_type_fmask[] = {
 /* Bits 30-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type,
- 0x00000404, 0x0020);
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type,
+ 0x00000404, 0x0020);
-static const u32 ipa_reg_src_rsrc_grp_45_rsrc_type_fmask[] = {
+static const u32 reg_src_rsrc_grp_45_rsrc_type_fmask[] = {
 [X_MIN_LIM] = GENMASK(5, 0),
 /* Bits 6-7 reserved */
 [X_MAX_LIM] = GENMASK(13, 8),
@@ -249,10 +249,10 @@ static const u32 ipa_reg_src_rsrc_grp_45_rsrc_type_fmask[] = {
 /* Bits 30-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_45_RSRC_TYPE, src_rsrc_grp_45_rsrc_type,
- 0x00000408, 0x0020);
+REG_STRIDE_FIELDS(SRC_RSRC_GRP_45_RSRC_TYPE, src_rsrc_grp_45_rsrc_type,
+ 0x00000408, 0x0020);
-static const u32 ipa_reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
+static const u32 reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
 [X_MIN_LIM] = GENMASK(5, 0),
 /* Bits 6-7 reserved */
 [X_MAX_LIM] = GENMASK(13, 8),
@@ -263,10 +263,10 @@ static const u32 ipa_reg_dst_rsrc_grp_01_rsrc_type_fmask[] = {
 /* Bits 30-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type,
- 0x00000500, 0x0020);
+REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type,
+ 0x00000500, 0x0020);
-static const u32 ipa_reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
+static const u32 reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
 [X_MIN_LIM] = GENMASK(5, 0),
 /* Bits 6-7 reserved */
 [X_MAX_LIM] = GENMASK(13, 8),
@@ -277,10 +277,10 @@ static const u32 ipa_reg_dst_rsrc_grp_23_rsrc_type_fmask[] = {
 /* Bits 30-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type,
- 0x00000504, 0x0020);
+REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type,
+ 0x00000504, 0x0020);
-static const u32 ipa_reg_dst_rsrc_grp_45_rsrc_type_fmask[] = {
+static const u32 reg_dst_rsrc_grp_45_rsrc_type_fmask[] = {
 [X_MIN_LIM] = GENMASK(5, 0),
 /* Bits 6-7 reserved */
 [X_MAX_LIM] = GENMASK(13, 8),
@@ -291,10 +291,10 @@ static const u32 ipa_reg_dst_rsrc_grp_45_rsrc_type_fmask[] = {
 /* Bits 30-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_45_RSRC_TYPE, dst_rsrc_grp_45_rsrc_type,
- 0x00000508, 0x0020);
+REG_STRIDE_FIELDS(DST_RSRC_GRP_45_RSRC_TYPE, dst_rsrc_grp_45_rsrc_type,
+ 0x00000508, 0x0020);
-static const u32 ipa_reg_endp_init_cfg_fmask[] = {
+static const u32 reg_endp_init_cfg_fmask[] = {
 [FRAG_OFFLOAD_EN] = BIT(0),
 [CS_OFFLOAD_EN] = GENMASK(2, 1),
 [CS_METADATA_HDR_OFFSET] = GENMASK(6, 3),
@@ -303,16 +303,16 @@ static const u32 ipa_reg_endp_init_cfg_fmask[] = {
 /* Bits 9-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070);
-static const u32 ipa_reg_endp_init_nat_fmask[] = {
+static const u32 reg_endp_init_nat_fmask[] = {
 [NAT_EN] = GENMASK(1, 0),
 /* Bits 2-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070);
-static const u32 ipa_reg_endp_init_hdr_fmask[] = {
+static const u32 reg_endp_init_hdr_fmask[] = {
 [HDR_LEN] = GENMASK(5, 0),
 [HDR_OFST_METADATA_VALID] = BIT(6),
 [HDR_OFST_METADATA] = GENMASK(12, 7),
@@ -325,9 +325,9 @@ static const u32 ipa_reg_endp_init_hdr_fmask[] = {
 [HDR_OFST_METADATA_MSB] = GENMASK(31, 30),
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070);
-static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = {
+static const u32 reg_endp_init_hdr_ext_fmask[] = {
 [HDR_ENDIANNESS] = BIT(0),
 [HDR_TOTAL_LEN_OR_PAD_VALID] = BIT(1),
 [HDR_TOTAL_LEN_OR_PAD] = BIT(2),
@@ -341,12 +341,12 @@ static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = {
 /* Bits 22-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070);
-IPA_REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask,
- 0x00000818, 0x0070);
+REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask,
+ 0x00000818, 0x0070);
-static const u32 ipa_reg_endp_init_mode_fmask[] = {
+static const u32 reg_endp_init_mode_fmask[] = {
 [ENDP_MODE] = GENMASK(2, 0),
 [DCPH_ENABLE] = BIT(3),
 [DEST_PIPE_INDEX] = GENMASK(8, 4),
@@ -357,9 +357,9 @@ static const u32 ipa_reg_endp_init_mode_fmask[] = {
 /* Bits 30-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070);
-static const u32 ipa_reg_endp_init_aggr_fmask[] = {
+static const u32 reg_endp_init_aggr_fmask[] = {
 [AGGR_EN] = GENMASK(1, 0),
 [AGGR_TYPE] = GENMASK(4, 2),
 [BYTE_LIMIT] = GENMASK(10, 5),
@@ -374,27 +374,27 @@ static const u32 ipa_reg_endp_init_aggr_fmask[] = {
 /* Bits 28-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070);
-static const u32 ipa_reg_endp_init_hol_block_en_fmask[] = {
+static const u32 reg_endp_init_hol_block_en_fmask[] = {
 [HOL_BLOCK_EN] = BIT(0),
 /* Bits 1-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en,
- 0x0000082c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en,
+ 0x0000082c, 0x0070);
-static const u32 ipa_reg_endp_init_hol_block_timer_fmask[] = {
+static const u32 reg_endp_init_hol_block_timer_fmask[] = {
 [TIMER_LIMIT] = GENMASK(4, 0),
 /* Bits 5-7 reserved */
 [TIMER_GRAN_SEL] = BIT(8),
 /* Bits 9-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer,
- 0x00000830, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer,
+ 0x00000830, 0x0070);
-static const u32 ipa_reg_endp_init_deaggr_fmask[] = {
+static const u32 reg_endp_init_deaggr_fmask[] = {
 [DEAGGR_HDR_LEN] = GENMASK(5, 0),
 [SYSPIPE_ERR_DETECTION] = BIT(6),
 [PACKET_OFFSET_VALID] = BIT(7),
@@ -404,24 +404,23 @@ static const u32 ipa_reg_endp_init_deaggr_fmask[] = {
 [MAX_PACKET_LEN] = GENMASK(31, 16),
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070);
-static const u32 ipa_reg_endp_init_rsrc_grp_fmask[] = {
+static const u32 reg_endp_init_rsrc_grp_fmask[] = {
 [ENDP_RSRC_GRP] = GENMASK(2, 0),
 /* Bits 3-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp,
- 0x00000838, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp, 0x00000838, 0x0070);
-static const u32 ipa_reg_endp_init_seq_fmask[] = {
+static const u32 reg_endp_init_seq_fmask[] = {
 [SEQ_TYPE] = GENMASK(7, 0),
 /* Bits 8-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070);
-static const u32 ipa_reg_endp_status_fmask[] = {
+static const u32 reg_endp_status_fmask[] = {
 [STATUS_EN] = BIT(0),
 [STATUS_ENDP] = GENMASK(5, 1),
 /* Bits 6-8 reserved */
@@ -429,9 +428,9 @@ static const u32 ipa_reg_endp_status_fmask[] = {
 /* Bits 10-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070);
+REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070);
-static const u32 ipa_reg_endp_filter_router_hsh_cfg_fmask[] = {
+static const u32 reg_endp_filter_router_hsh_cfg_fmask[] = {
 [FILTER_HASH_MSK_SRC_ID] = BIT(0),
 [FILTER_HASH_MSK_SRC_IP] = BIT(1),
 [FILTER_HASH_MSK_DST_IP] = BIT(2),
@@ -452,85 +451,85 @@ static const u32 ipa_reg_endp_filter_router_hsh_cfg_fmask[] = {
 /* Bits 23-31 reserved */
 };
-IPA_REG_STRIDE_FIELDS(ENDP_FILTER_ROUTER_HSH_CFG, endp_filter_router_hsh_cfg,
- 0x0000085c, 0x0070);
+REG_STRIDE_FIELDS(ENDP_FILTER_ROUTER_HSH_CFG, endp_filter_router_hsh_cfg,
+ 0x0000085c, 0x0070);
 /* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */
-IPA_REG(IPA_IRQ_STTS, ipa_irq_stts, 0x00003008 + 0x1000 * GSI_EE_AP);
+REG(IPA_IRQ_STTS, ipa_irq_stts, 0x00003008 + 0x1000 * GSI_EE_AP); /* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */ -IPA_REG(IPA_IRQ_EN, ipa_irq_en, 0x0000300c + 0x1000 * GSI_EE_AP); +REG(IPA_IRQ_EN, ipa_irq_en, 0x0000300c + 0x1000 * GSI_EE_AP); /* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */ -IPA_REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00003010 + 0x1000 * GSI_EE_AP); +REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00003010 + 0x1000 * GSI_EE_AP); -static const u32 ipa_reg_ipa_irq_uc_fmask[] = { +static const u32 reg_ipa_irq_uc_fmask[] = { [UC_INTR] = BIT(0), /* Bits 1-31 reserved */ }; -IPA_REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000301c + 0x1000 * GSI_EE_AP); +REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000301c + 0x1000 * GSI_EE_AP); /* Valid bits defined by ipa->available */ -IPA_REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info, - 0x00003030 + 0x1000 * GSI_EE_AP, 0x0004); +REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info, + 0x00003030 + 0x1000 * GSI_EE_AP, 0x0004); /* Valid bits defined by ipa->available */ -IPA_REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en, - 0x00003034 + 0x1000 * GSI_EE_AP, 0x0004); +REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en, + 0x00003034 + 0x1000 * GSI_EE_AP, 0x0004); /* Valid bits defined by ipa->available */ -IPA_REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr, - 0x00003038 + 0x1000 * GSI_EE_AP, 0x0004); - -static const struct ipa_reg *ipa_reg_array[] = { - [COMP_CFG] = &ipa_reg_comp_cfg, - [CLKON_CFG] = &ipa_reg_clkon_cfg, - [ROUTE] = &ipa_reg_route, - [SHARED_MEM_SIZE] = &ipa_reg_shared_mem_size, - [QSB_MAX_WRITES] = &ipa_reg_qsb_max_writes, - [QSB_MAX_READS] = &ipa_reg_qsb_max_reads, - [FILT_ROUT_HASH_EN] = &ipa_reg_filt_rout_hash_en, - [FILT_ROUT_HASH_FLUSH] = &ipa_reg_filt_rout_hash_flush, - [STATE_AGGR_ACTIVE] = &ipa_reg_state_aggr_active, - [LOCAL_PKT_PROC_CNTXT] = &ipa_reg_local_pkt_proc_cntxt, - [AGGR_FORCE_CLOSE] = &ipa_reg_aggr_force_close, - [IPA_TX_CFG] = &ipa_reg_ipa_tx_cfg, - [FLAVOR_0] = &ipa_reg_flavor_0, - [IDLE_INDICATION_CFG] = &ipa_reg_idle_indication_cfg, - [QTIME_TIMESTAMP_CFG] = &ipa_reg_qtime_timestamp_cfg, - [TIMERS_XO_CLK_DIV_CFG] = &ipa_reg_timers_xo_clk_div_cfg, - [TIMERS_PULSE_GRAN_CFG] = &ipa_reg_timers_pulse_gran_cfg, - [SRC_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_01_rsrc_type, - [SRC_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_23_rsrc_type, - [SRC_RSRC_GRP_45_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_45_rsrc_type, - [DST_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_01_rsrc_type, - [DST_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_23_rsrc_type, - [DST_RSRC_GRP_45_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_45_rsrc_type, - [ENDP_INIT_CFG] = &ipa_reg_endp_init_cfg, - [ENDP_INIT_NAT] = &ipa_reg_endp_init_nat, - [ENDP_INIT_HDR] = &ipa_reg_endp_init_hdr, - [ENDP_INIT_HDR_EXT] = &ipa_reg_endp_init_hdr_ext, - [ENDP_INIT_HDR_METADATA_MASK] = &ipa_reg_endp_init_hdr_metadata_mask, - [ENDP_INIT_MODE] = &ipa_reg_endp_init_mode, - [ENDP_INIT_AGGR] = &ipa_reg_endp_init_aggr, - [ENDP_INIT_HOL_BLOCK_EN] = &ipa_reg_endp_init_hol_block_en, - [ENDP_INIT_HOL_BLOCK_TIMER] = &ipa_reg_endp_init_hol_block_timer, - [ENDP_INIT_DEAGGR] = &ipa_reg_endp_init_deaggr, - [ENDP_INIT_RSRC_GRP] = &ipa_reg_endp_init_rsrc_grp, - [ENDP_INIT_SEQ] = &ipa_reg_endp_init_seq, - [ENDP_STATUS] = &ipa_reg_endp_status, - [ENDP_FILTER_ROUTER_HSH_CFG] = &ipa_reg_endp_filter_router_hsh_cfg, - [IPA_IRQ_STTS] = &ipa_reg_ipa_irq_stts, - [IPA_IRQ_EN] = &ipa_reg_ipa_irq_en, - [IPA_IRQ_CLR] = &ipa_reg_ipa_irq_clr, - [IPA_IRQ_UC] = &ipa_reg_ipa_irq_uc, - [IRQ_SUSPEND_INFO] = 
&ipa_reg_irq_suspend_info, - [IRQ_SUSPEND_EN] = &ipa_reg_irq_suspend_en, - [IRQ_SUSPEND_CLR] = &ipa_reg_irq_suspend_clr, -}; - -const struct ipa_regs ipa_regs_v4_5 = { - .reg_count = ARRAY_SIZE(ipa_reg_array), - .reg = ipa_reg_array, +REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr, + 0x00003038 + 0x1000 * GSI_EE_AP, 0x0004); + +static const struct reg *reg_array[] = { + [COMP_CFG] = &reg_comp_cfg, + [CLKON_CFG] = &reg_clkon_cfg, + [ROUTE] = &reg_route, + [SHARED_MEM_SIZE] = &reg_shared_mem_size, + [QSB_MAX_WRITES] = &reg_qsb_max_writes, + [QSB_MAX_READS] = &reg_qsb_max_reads, + [FILT_ROUT_HASH_EN] = &reg_filt_rout_hash_en, + [FILT_ROUT_HASH_FLUSH] = &reg_filt_rout_hash_flush, + [STATE_AGGR_ACTIVE] = &reg_state_aggr_active, + [LOCAL_PKT_PROC_CNTXT] = &reg_local_pkt_proc_cntxt, + [AGGR_FORCE_CLOSE] = &reg_aggr_force_close, + [IPA_TX_CFG] = &reg_ipa_tx_cfg, + [FLAVOR_0] = &reg_flavor_0, + [IDLE_INDICATION_CFG] = &reg_idle_indication_cfg, + [QTIME_TIMESTAMP_CFG] = &reg_qtime_timestamp_cfg, + [TIMERS_XO_CLK_DIV_CFG] = &reg_timers_xo_clk_div_cfg, + [TIMERS_PULSE_GRAN_CFG] = &reg_timers_pulse_gran_cfg, + [SRC_RSRC_GRP_01_RSRC_TYPE] = &reg_src_rsrc_grp_01_rsrc_type, + [SRC_RSRC_GRP_23_RSRC_TYPE] = &reg_src_rsrc_grp_23_rsrc_type, + [SRC_RSRC_GRP_45_RSRC_TYPE] = &reg_src_rsrc_grp_45_rsrc_type, + [DST_RSRC_GRP_01_RSRC_TYPE] = &reg_dst_rsrc_grp_01_rsrc_type, + [DST_RSRC_GRP_23_RSRC_TYPE] = &reg_dst_rsrc_grp_23_rsrc_type, + [DST_RSRC_GRP_45_RSRC_TYPE] = &reg_dst_rsrc_grp_45_rsrc_type, + [ENDP_INIT_CFG] = &reg_endp_init_cfg, + [ENDP_INIT_NAT] = &reg_endp_init_nat, + [ENDP_INIT_HDR] = &reg_endp_init_hdr, + [ENDP_INIT_HDR_EXT] = &reg_endp_init_hdr_ext, + [ENDP_INIT_HDR_METADATA_MASK] = &reg_endp_init_hdr_metadata_mask, + [ENDP_INIT_MODE] = &reg_endp_init_mode, + [ENDP_INIT_AGGR] = &reg_endp_init_aggr, + [ENDP_INIT_HOL_BLOCK_EN] = &reg_endp_init_hol_block_en, + [ENDP_INIT_HOL_BLOCK_TIMER] = &reg_endp_init_hol_block_timer, + [ENDP_INIT_DEAGGR] = &reg_endp_init_deaggr, + [ENDP_INIT_RSRC_GRP] = &reg_endp_init_rsrc_grp, + [ENDP_INIT_SEQ] = &reg_endp_init_seq, + [ENDP_STATUS] = &reg_endp_status, + [ENDP_FILTER_ROUTER_HSH_CFG] = &reg_endp_filter_router_hsh_cfg, + [IPA_IRQ_STTS] = &reg_ipa_irq_stts, + [IPA_IRQ_EN] = &reg_ipa_irq_en, + [IPA_IRQ_CLR] = &reg_ipa_irq_clr, + [IPA_IRQ_UC] = &reg_ipa_irq_uc, + [IRQ_SUSPEND_INFO] = &reg_irq_suspend_info, + [IRQ_SUSPEND_EN] = &reg_irq_suspend_en, + [IRQ_SUSPEND_CLR] = &reg_irq_suspend_clr, +}; + +const struct regs ipa_regs_v4_5 = { + .reg_count = ARRAY_SIZE(reg_array), + .reg = reg_array, }; diff --git a/drivers/net/ipa/reg/ipa_reg-v4.7.c b/drivers/net/ipa/reg/ipa_reg-v4.7.c index 21f8a58e59a0..731824fce1d4 100644 --- a/drivers/net/ipa/reg/ipa_reg-v4.7.c +++ b/drivers/net/ipa/reg/ipa_reg-v4.7.c @@ -7,7 +7,7 @@ #include "../ipa.h" #include "../ipa_reg.h" -static const u32 ipa_reg_comp_cfg_fmask[] = { +static const u32 reg_comp_cfg_fmask[] = { [RAM_ARB_PRI_CLIENT_SAMP_FIX_DIS] = BIT(0), [GSI_SNOC_BYPASS_DIS] = BIT(1), [GEN_QMB_0_SNOC_BYPASS_DIS] = BIT(2), @@ -30,9 +30,9 @@ static const u32 ipa_reg_comp_cfg_fmask[] = { /* Bits 22-31 reserved */ }; -IPA_REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c); +REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c); -static const u32 ipa_reg_clkon_cfg_fmask[] = { +static const u32 reg_clkon_cfg_fmask[] = { [CLKON_RX] = BIT(0), [CLKON_PROC] = BIT(1), [TX_WRAPPER] = BIT(2), @@ -67,9 +67,9 @@ static const u32 ipa_reg_clkon_cfg_fmask[] = { [DRBIP] = BIT(31), }; -IPA_REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044); +REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044); -static const u32 ipa_reg_route_fmask[] = { +static const u32 reg_route_fmask[] = { [ROUTE_DIS] = BIT(0), [ROUTE_DEF_PIPE] = 
GENMASK(5, 1), [ROUTE_DEF_HDR_TABLE] = BIT(6), @@ -80,24 +80,24 @@ static const u32 ipa_reg_route_fmask[] = { /* Bits 25-31 reserved */ }; -IPA_REG_FIELDS(ROUTE, route, 0x00000048); +REG_FIELDS(ROUTE, route, 0x00000048); -static const u32 ipa_reg_shared_mem_size_fmask[] = { +static const u32 reg_shared_mem_size_fmask[] = { [MEM_SIZE] = GENMASK(15, 0), [MEM_BADDR] = GENMASK(31, 16), }; -IPA_REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054); +REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054); -static const u32 ipa_reg_qsb_max_writes_fmask[] = { +static const u32 reg_qsb_max_writes_fmask[] = { [GEN_QMB_0_MAX_WRITES] = GENMASK(3, 0), [GEN_QMB_1_MAX_WRITES] = GENMASK(7, 4), /* Bits 8-31 reserved */ }; -IPA_REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074); +REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074); -static const u32 ipa_reg_qsb_max_reads_fmask[] = { +static const u32 reg_qsb_max_reads_fmask[] = { [GEN_QMB_0_MAX_READS] = GENMASK(3, 0), [GEN_QMB_1_MAX_READS] = GENMASK(7, 4), /* Bits 8-15 reserved */ @@ -105,9 +105,9 @@ static const u32 ipa_reg_qsb_max_reads_fmask[] = { [GEN_QMB_1_MAX_READS_BEATS] = GENMASK(31, 24), }; -IPA_REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078); +REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078); -static const u32 ipa_reg_filt_rout_hash_en_fmask[] = { +static const u32 reg_filt_rout_hash_en_fmask[] = { [IPV6_ROUTER_HASH] = BIT(0), /* Bits 1-3 reserved */ [IPV6_FILTER_HASH] = BIT(4), @@ -118,9 +118,9 @@ static const u32 ipa_reg_filt_rout_hash_en_fmask[] = { /* Bits 13-31 reserved */ }; -IPA_REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x0000148); +REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x0000148); -static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = { +static const u32 reg_filt_rout_hash_flush_fmask[] = { [IPV6_ROUTER_HASH] = BIT(0), /* Bits 1-3 reserved */ [IPV6_FILTER_HASH] = BIT(4), @@ -131,23 +131,23 @@ static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = { /* Bits 13-31 reserved */ }; -IPA_REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x000014c); +REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x000014c); /* Valid bits defined by ipa->available */ -IPA_REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4, 0x0004); +REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4, 0x0004); -static const u32 ipa_reg_local_pkt_proc_cntxt_fmask[] = { +static const u32 reg_local_pkt_proc_cntxt_fmask[] = { [IPA_BASE_ADDR] = GENMASK(17, 0), /* Bits 18-31 reserved */ }; /* Offset must be a multiple of 8 */ -IPA_REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8); +REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8); /* Valid bits defined by ipa->available */ -IPA_REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004); +REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004); -static const u32 ipa_reg_ipa_tx_cfg_fmask[] = { +static const u32 reg_ipa_tx_cfg_fmask[] = { /* Bits 0-1 reserved */ [PREFETCH_ALMOST_EMPTY_SIZE_TX0] = GENMASK(5, 2), [DMAW_SCND_OUTSD_PRED_THRESHOLD] = GENMASK(9, 6), @@ -160,9 +160,9 @@ static const u32 ipa_reg_ipa_tx_cfg_fmask[] = { /* Bits 19-31 reserved */ }; -IPA_REG_FIELDS(IPA_TX_CFG, ipa_tx_cfg, 0x000001fc); +REG_FIELDS(IPA_TX_CFG, ipa_tx_cfg, 0x000001fc); -static const u32 ipa_reg_flavor_0_fmask[] = { +static const u32 reg_flavor_0_fmask[] = { [MAX_PIPES] = GENMASK(3, 0), /* Bits 4-7 reserved */ [MAX_CONS_PIPES] = GENMASK(12, 8), @@ -173,17 +173,17 @@ static const u32 ipa_reg_flavor_0_fmask[] = { /* Bits 28-31 
reserved */ }; -IPA_REG_FIELDS(FLAVOR_0, flavor_0, 0x00000210); +REG_FIELDS(FLAVOR_0, flavor_0, 0x00000210); -static const u32 ipa_reg_idle_indication_cfg_fmask[] = { +static const u32 reg_idle_indication_cfg_fmask[] = { [ENTER_IDLE_DEBOUNCE_THRESH] = GENMASK(15, 0), [CONST_NON_IDLE_ENABLE] = BIT(16), /* Bits 17-31 reserved */ }; -IPA_REG_FIELDS(IDLE_INDICATION_CFG, idle_indication_cfg, 0x00000240); +REG_FIELDS(IDLE_INDICATION_CFG, idle_indication_cfg, 0x00000240); -static const u32 ipa_reg_qtime_timestamp_cfg_fmask[] = { +static const u32 reg_qtime_timestamp_cfg_fmask[] = { [DPL_TIMESTAMP_LSB] = GENMASK(4, 0), /* Bits 5-6 reserved */ [DPL_TIMESTAMP_SEL] = BIT(7), @@ -193,25 +193,25 @@ static const u32 ipa_reg_qtime_timestamp_cfg_fmask[] = { /* Bits 21-31 reserved */ }; -IPA_REG_FIELDS(QTIME_TIMESTAMP_CFG, qtime_timestamp_cfg, 0x0000024c); +REG_FIELDS(QTIME_TIMESTAMP_CFG, qtime_timestamp_cfg, 0x0000024c); -static const u32 ipa_reg_timers_xo_clk_div_cfg_fmask[] = { +static const u32 reg_timers_xo_clk_div_cfg_fmask[] = { [DIV_VALUE] = GENMASK(8, 0), /* Bits 9-30 reserved */ [DIV_ENABLE] = BIT(31), }; -IPA_REG_FIELDS(TIMERS_XO_CLK_DIV_CFG, timers_xo_clk_div_cfg, 0x00000250); +REG_FIELDS(TIMERS_XO_CLK_DIV_CFG, timers_xo_clk_div_cfg, 0x00000250); -static const u32 ipa_reg_timers_pulse_gran_cfg_fmask[] = { +static const u32 reg_timers_pulse_gran_cfg_fmask[] = { [PULSE_GRAN_0] = GENMASK(2, 0), [PULSE_GRAN_1] = GENMASK(5, 3), [PULSE_GRAN_2] = GENMASK(8, 6), }; -IPA_REG_FIELDS(TIMERS_PULSE_GRAN_CFG, timers_pulse_gran_cfg, 0x00000254); +REG_FIELDS(TIMERS_PULSE_GRAN_CFG, timers_pulse_gran_cfg, 0x00000254); -static const u32 ipa_reg_src_rsrc_grp_01_rsrc_type_fmask[] = { +static const u32 reg_src_rsrc_grp_01_rsrc_type_fmask[] = { [X_MIN_LIM] = GENMASK(5, 0), /* Bits 6-7 reserved */ [X_MAX_LIM] = GENMASK(13, 8), @@ -222,10 +222,10 @@ static const u32 ipa_reg_src_rsrc_grp_01_rsrc_type_fmask[] = { /* Bits 30-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type, - 0x00000400, 0x0020); +REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type, + 0x00000400, 0x0020); -static const u32 ipa_reg_src_rsrc_grp_23_rsrc_type_fmask[] = { +static const u32 reg_src_rsrc_grp_23_rsrc_type_fmask[] = { [X_MIN_LIM] = GENMASK(5, 0), /* Bits 6-7 reserved */ [X_MAX_LIM] = GENMASK(13, 8), @@ -236,10 +236,10 @@ static const u32 ipa_reg_src_rsrc_grp_23_rsrc_type_fmask[] = { /* Bits 30-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type, - 0x00000404, 0x0020); +REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type, + 0x00000404, 0x0020); -static const u32 ipa_reg_dst_rsrc_grp_01_rsrc_type_fmask[] = { +static const u32 reg_dst_rsrc_grp_01_rsrc_type_fmask[] = { [X_MIN_LIM] = GENMASK(5, 0), /* Bits 6-7 reserved */ [X_MAX_LIM] = GENMASK(13, 8), @@ -250,10 +250,10 @@ static const u32 ipa_reg_dst_rsrc_grp_01_rsrc_type_fmask[] = { /* Bits 30-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type, - 0x00000500, 0x0020); +REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type, + 0x00000500, 0x0020); -static const u32 ipa_reg_dst_rsrc_grp_23_rsrc_type_fmask[] = { +static const u32 reg_dst_rsrc_grp_23_rsrc_type_fmask[] = { [X_MIN_LIM] = GENMASK(5, 0), /* Bits 6-7 reserved */ [X_MAX_LIM] = GENMASK(13, 8), @@ -264,10 +264,10 @@ static const u32 ipa_reg_dst_rsrc_grp_23_rsrc_type_fmask[] = { /* Bits 30-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, 
dst_rsrc_grp_23_rsrc_type, - 0x00000504, 0x0020); +REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type, + 0x00000504, 0x0020); -static const u32 ipa_reg_endp_init_cfg_fmask[] = { +static const u32 reg_endp_init_cfg_fmask[] = { [FRAG_OFFLOAD_EN] = BIT(0), [CS_OFFLOAD_EN] = GENMASK(2, 1), [CS_METADATA_HDR_OFFSET] = GENMASK(6, 3), @@ -276,16 +276,16 @@ static const u32 ipa_reg_endp_init_cfg_fmask[] = { /* Bits 9-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070); -static const u32 ipa_reg_endp_init_nat_fmask[] = { +static const u32 reg_endp_init_nat_fmask[] = { [NAT_EN] = GENMASK(1, 0), /* Bits 2-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070); -static const u32 ipa_reg_endp_init_hdr_fmask[] = { +static const u32 reg_endp_init_hdr_fmask[] = { [HDR_LEN] = GENMASK(5, 0), [HDR_OFST_METADATA_VALID] = BIT(6), [HDR_OFST_METADATA] = GENMASK(12, 7), @@ -298,9 +298,9 @@ static const u32 ipa_reg_endp_init_hdr_fmask[] = { [HDR_OFST_METADATA_MSB] = GENMASK(31, 30), }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070); -static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = { +static const u32 reg_endp_init_hdr_ext_fmask[] = { [HDR_ENDIANNESS] = BIT(0), [HDR_TOTAL_LEN_OR_PAD_VALID] = BIT(1), [HDR_TOTAL_LEN_OR_PAD] = BIT(2), @@ -314,12 +314,12 @@ static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = { /* Bits 22-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070); -IPA_REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask, - 0x00000818, 0x0070); +REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask, + 0x00000818, 0x0070); -static const u32 ipa_reg_endp_init_mode_fmask[] = { +static const u32 reg_endp_init_mode_fmask[] = { [ENDP_MODE] = GENMASK(2, 0), [DCPH_ENABLE] = BIT(3), [DEST_PIPE_INDEX] = GENMASK(8, 4), @@ -330,9 +330,9 @@ static const u32 ipa_reg_endp_init_mode_fmask[] = { /* Bits 30-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070); -static const u32 ipa_reg_endp_init_aggr_fmask[] = { +static const u32 reg_endp_init_aggr_fmask[] = { [AGGR_EN] = GENMASK(1, 0), [AGGR_TYPE] = GENMASK(4, 2), [BYTE_LIMIT] = GENMASK(10, 5), @@ -347,27 +347,27 @@ static const u32 ipa_reg_endp_init_aggr_fmask[] = { /* Bits 28-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070); -static const u32 ipa_reg_endp_init_hol_block_en_fmask[] = { +static const u32 reg_endp_init_hol_block_en_fmask[] = { [HOL_BLOCK_EN] = BIT(0), /* Bits 1-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en, - 0x0000082c, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en, + 0x0000082c, 0x0070); -static const u32 ipa_reg_endp_init_hol_block_timer_fmask[] = { +static const u32 reg_endp_init_hol_block_timer_fmask[] = { [TIMER_LIMIT] = GENMASK(4, 0), /* Bits 5-7 reserved */ [TIMER_GRAN_SEL] = BIT(8), /* Bits 9-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, 
endp_init_hol_block_timer, - 0x00000830, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer, + 0x00000830, 0x0070); -static const u32 ipa_reg_endp_init_deaggr_fmask[] = { +static const u32 reg_endp_init_deaggr_fmask[] = { [DEAGGR_HDR_LEN] = GENMASK(5, 0), [SYSPIPE_ERR_DETECTION] = BIT(6), [PACKET_OFFSET_VALID] = BIT(7), @@ -377,24 +377,23 @@ static const u32 ipa_reg_endp_init_deaggr_fmask[] = { [MAX_PACKET_LEN] = GENMASK(31, 16), }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070); -static const u32 ipa_reg_endp_init_rsrc_grp_fmask[] = { +static const u32 reg_endp_init_rsrc_grp_fmask[] = { [ENDP_RSRC_GRP] = BIT(0), /* Bits 1-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp, - 0x00000838, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp, 0x00000838, 0x0070); -static const u32 ipa_reg_endp_init_seq_fmask[] = { +static const u32 reg_endp_init_seq_fmask[] = { [SEQ_TYPE] = GENMASK(7, 0), /* Bits 8-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070); -static const u32 ipa_reg_endp_status_fmask[] = { +static const u32 reg_endp_status_fmask[] = { [STATUS_EN] = BIT(0), [STATUS_ENDP] = GENMASK(5, 1), /* Bits 6-8 reserved */ @@ -402,9 +401,9 @@ static const u32 ipa_reg_endp_status_fmask[] = { /* Bits 10-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070); +REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070); -static const u32 ipa_reg_endp_filter_router_hsh_cfg_fmask[] = { +static const u32 reg_endp_filter_router_hsh_cfg_fmask[] = { [FILTER_HASH_MSK_SRC_ID] = BIT(0), [FILTER_HASH_MSK_SRC_IP] = BIT(1), [FILTER_HASH_MSK_DST_IP] = BIT(2), @@ -425,83 +424,83 @@ static const u32 ipa_reg_endp_filter_router_hsh_cfg_fmask[] = { /* Bits 23-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_FILTER_ROUTER_HSH_CFG, endp_filter_router_hsh_cfg, - 0x0000085c, 0x0070); +REG_STRIDE_FIELDS(ENDP_FILTER_ROUTER_HSH_CFG, endp_filter_router_hsh_cfg, + 0x0000085c, 0x0070); /* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */ -IPA_REG(IPA_IRQ_STTS, ipa_irq_stts, 0x00003008 + 0x1000 * GSI_EE_AP); +REG(IPA_IRQ_STTS, ipa_irq_stts, 0x00003008 + 0x1000 * GSI_EE_AP); /* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */ -IPA_REG(IPA_IRQ_EN, ipa_irq_en, 0x0000300c + 0x1000 * GSI_EE_AP); +REG(IPA_IRQ_EN, ipa_irq_en, 0x0000300c + 0x1000 * GSI_EE_AP); /* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */ -IPA_REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00003010 + 0x1000 * GSI_EE_AP); +REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00003010 + 0x1000 * GSI_EE_AP); -static const u32 ipa_reg_ipa_irq_uc_fmask[] = { +static const u32 reg_ipa_irq_uc_fmask[] = { [UC_INTR] = BIT(0), /* Bits 1-31 reserved */ }; -IPA_REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000301c + 0x1000 * GSI_EE_AP); +REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000301c + 0x1000 * GSI_EE_AP); /* Valid bits defined by ipa->available */ -IPA_REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info, - 0x00003030 + 0x1000 * GSI_EE_AP, 0x0004); +REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info, + 0x00003030 + 0x1000 * GSI_EE_AP, 0x0004); /* Valid bits defined by ipa->available */ -IPA_REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en, - 0x00003034 + 0x1000 * GSI_EE_AP, 0x0004); +REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en, + 0x00003034 + 0x1000 * GSI_EE_AP, 0x0004); 
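/*
 * Illustrative sketch, not part of the patch above: REG_STRIDE() and
 * REG_STRIDE_FIELDS() describe registers replicated at a fixed byte
 * stride -- 0x0070 between the per-endpoint ENDP_* registers, 0x0004
 * between the 32-bit words of bitmap registers such as IRQ_SUSPEND_EN.
 * The offset arithmetic reduces to the hypothetical helper below (the
 * driver's real accessors live in ipa_reg.h and are not shown in this
 * diff).
 */
#include <linux/types.h>

static u32 example_reg_n_offset(u32 offset, u32 stride, u32 n)
{
	/* e.g. ENDP_INIT_AGGR for endpoint n: 0x00000824 + n * 0x0070 */
	return offset + n * stride;
}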
/* Valid bits defined by ipa->available */ -IPA_REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr, - 0x00003038 + 0x1000 * GSI_EE_AP, 0x0004); - -static const struct ipa_reg *ipa_reg_array[] = { - [COMP_CFG] = &ipa_reg_comp_cfg, - [CLKON_CFG] = &ipa_reg_clkon_cfg, - [ROUTE] = &ipa_reg_route, - [SHARED_MEM_SIZE] = &ipa_reg_shared_mem_size, - [QSB_MAX_WRITES] = &ipa_reg_qsb_max_writes, - [QSB_MAX_READS] = &ipa_reg_qsb_max_reads, - [FILT_ROUT_HASH_EN] = &ipa_reg_filt_rout_hash_en, - [FILT_ROUT_HASH_FLUSH] = &ipa_reg_filt_rout_hash_flush, - [STATE_AGGR_ACTIVE] = &ipa_reg_state_aggr_active, - [LOCAL_PKT_PROC_CNTXT] = &ipa_reg_local_pkt_proc_cntxt, - [AGGR_FORCE_CLOSE] = &ipa_reg_aggr_force_close, - [IPA_TX_CFG] = &ipa_reg_ipa_tx_cfg, - [FLAVOR_0] = &ipa_reg_flavor_0, - [IDLE_INDICATION_CFG] = &ipa_reg_idle_indication_cfg, - [QTIME_TIMESTAMP_CFG] = &ipa_reg_qtime_timestamp_cfg, - [TIMERS_XO_CLK_DIV_CFG] = &ipa_reg_timers_xo_clk_div_cfg, - [TIMERS_PULSE_GRAN_CFG] = &ipa_reg_timers_pulse_gran_cfg, - [SRC_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_01_rsrc_type, - [SRC_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_23_rsrc_type, - [DST_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_01_rsrc_type, - [DST_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_23_rsrc_type, - [ENDP_INIT_CFG] = &ipa_reg_endp_init_cfg, - [ENDP_INIT_NAT] = &ipa_reg_endp_init_nat, - [ENDP_INIT_HDR] = &ipa_reg_endp_init_hdr, - [ENDP_INIT_HDR_EXT] = &ipa_reg_endp_init_hdr_ext, - [ENDP_INIT_HDR_METADATA_MASK] = &ipa_reg_endp_init_hdr_metadata_mask, - [ENDP_INIT_MODE] = &ipa_reg_endp_init_mode, - [ENDP_INIT_AGGR] = &ipa_reg_endp_init_aggr, - [ENDP_INIT_HOL_BLOCK_EN] = &ipa_reg_endp_init_hol_block_en, - [ENDP_INIT_HOL_BLOCK_TIMER] = &ipa_reg_endp_init_hol_block_timer, - [ENDP_INIT_DEAGGR] = &ipa_reg_endp_init_deaggr, - [ENDP_INIT_RSRC_GRP] = &ipa_reg_endp_init_rsrc_grp, - [ENDP_INIT_SEQ] = &ipa_reg_endp_init_seq, - [ENDP_STATUS] = &ipa_reg_endp_status, - [ENDP_FILTER_ROUTER_HSH_CFG] = &ipa_reg_endp_filter_router_hsh_cfg, - [IPA_IRQ_STTS] = &ipa_reg_ipa_irq_stts, - [IPA_IRQ_EN] = &ipa_reg_ipa_irq_en, - [IPA_IRQ_CLR] = &ipa_reg_ipa_irq_clr, - [IPA_IRQ_UC] = &ipa_reg_ipa_irq_uc, - [IRQ_SUSPEND_INFO] = &ipa_reg_irq_suspend_info, - [IRQ_SUSPEND_EN] = &ipa_reg_irq_suspend_en, - [IRQ_SUSPEND_CLR] = &ipa_reg_irq_suspend_clr, -}; - -const struct ipa_regs ipa_regs_v4_7 = { - .reg_count = ARRAY_SIZE(ipa_reg_array), - .reg = ipa_reg_array, +REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr, + 0x00003038 + 0x1000 * GSI_EE_AP, 0x0004); + +static const struct reg *reg_array[] = { + [COMP_CFG] = &reg_comp_cfg, + [CLKON_CFG] = &reg_clkon_cfg, + [ROUTE] = &reg_route, + [SHARED_MEM_SIZE] = &reg_shared_mem_size, + [QSB_MAX_WRITES] = &reg_qsb_max_writes, + [QSB_MAX_READS] = &reg_qsb_max_reads, + [FILT_ROUT_HASH_EN] = &reg_filt_rout_hash_en, + [FILT_ROUT_HASH_FLUSH] = &reg_filt_rout_hash_flush, + [STATE_AGGR_ACTIVE] = &reg_state_aggr_active, + [LOCAL_PKT_PROC_CNTXT] = &reg_local_pkt_proc_cntxt, + [AGGR_FORCE_CLOSE] = &reg_aggr_force_close, + [IPA_TX_CFG] = &reg_ipa_tx_cfg, + [FLAVOR_0] = &reg_flavor_0, + [IDLE_INDICATION_CFG] = &reg_idle_indication_cfg, + [QTIME_TIMESTAMP_CFG] = &reg_qtime_timestamp_cfg, + [TIMERS_XO_CLK_DIV_CFG] = &reg_timers_xo_clk_div_cfg, + [TIMERS_PULSE_GRAN_CFG] = &reg_timers_pulse_gran_cfg, + [SRC_RSRC_GRP_01_RSRC_TYPE] = &reg_src_rsrc_grp_01_rsrc_type, + [SRC_RSRC_GRP_23_RSRC_TYPE] = &reg_src_rsrc_grp_23_rsrc_type, + [DST_RSRC_GRP_01_RSRC_TYPE] = &reg_dst_rsrc_grp_01_rsrc_type, + [DST_RSRC_GRP_23_RSRC_TYPE] = &reg_dst_rsrc_grp_23_rsrc_type, + [ENDP_INIT_CFG] = &reg_endp_init_cfg, + [ENDP_INIT_NAT] = 
&reg_endp_init_nat, + [ENDP_INIT_HDR] = &reg_endp_init_hdr, + [ENDP_INIT_HDR_EXT] = &reg_endp_init_hdr_ext, + [ENDP_INIT_HDR_METADATA_MASK] = &reg_endp_init_hdr_metadata_mask, + [ENDP_INIT_MODE] = &reg_endp_init_mode, + [ENDP_INIT_AGGR] = &reg_endp_init_aggr, + [ENDP_INIT_HOL_BLOCK_EN] = &reg_endp_init_hol_block_en, + [ENDP_INIT_HOL_BLOCK_TIMER] = &reg_endp_init_hol_block_timer, + [ENDP_INIT_DEAGGR] = &reg_endp_init_deaggr, + [ENDP_INIT_RSRC_GRP] = &reg_endp_init_rsrc_grp, + [ENDP_INIT_SEQ] = &reg_endp_init_seq, + [ENDP_STATUS] = &reg_endp_status, + [ENDP_FILTER_ROUTER_HSH_CFG] = &reg_endp_filter_router_hsh_cfg, + [IPA_IRQ_STTS] = &reg_ipa_irq_stts, + [IPA_IRQ_EN] = &reg_ipa_irq_en, + [IPA_IRQ_CLR] = &reg_ipa_irq_clr, + [IPA_IRQ_UC] = &reg_ipa_irq_uc, + [IRQ_SUSPEND_INFO] = &reg_irq_suspend_info, + [IRQ_SUSPEND_EN] = &reg_irq_suspend_en, + [IRQ_SUSPEND_CLR] = &reg_irq_suspend_clr, +}; + +const struct regs ipa_regs_v4_7 = { + .reg_count = ARRAY_SIZE(reg_array), + .reg = reg_array, }; diff --git a/drivers/net/ipa/reg/ipa_reg-v4.9.c b/drivers/net/ipa/reg/ipa_reg-v4.9.c index 1f67a03fe599..01f87b5290e0 100644 --- a/drivers/net/ipa/reg/ipa_reg-v4.9.c +++ b/drivers/net/ipa/reg/ipa_reg-v4.9.c @@ -7,7 +7,7 @@ #include "../ipa.h" #include "../ipa_reg.h" -static const u32 ipa_reg_comp_cfg_fmask[] = { +static const u32 reg_comp_cfg_fmask[] = { [RAM_ARB_PRI_CLIENT_SAMP_FIX_DIS] = BIT(0), [GSI_SNOC_BYPASS_DIS] = BIT(1), [GEN_QMB_0_SNOC_BYPASS_DIS] = BIT(2), @@ -35,9 +35,9 @@ static const u32 ipa_reg_comp_cfg_fmask[] = { [GEN_QMB_0_DYNAMIC_ASIZE] = BIT(31), }; -IPA_REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c); +REG_FIELDS(COMP_CFG, comp_cfg, 0x0000003c); -static const u32 ipa_reg_clkon_cfg_fmask[] = { +static const u32 reg_clkon_cfg_fmask[] = { [CLKON_RX] = BIT(0), [CLKON_PROC] = BIT(1), [TX_WRAPPER] = BIT(2), @@ -72,9 +72,9 @@ static const u32 ipa_reg_clkon_cfg_fmask[] = { [DRBIP] = BIT(31), }; -IPA_REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044); +REG_FIELDS(CLKON_CFG, clkon_cfg, 0x00000044); -static const u32 ipa_reg_route_fmask[] = { +static const u32 reg_route_fmask[] = { [ROUTE_DIS] = BIT(0), [ROUTE_DEF_PIPE] = GENMASK(5, 1), [ROUTE_DEF_HDR_TABLE] = BIT(6), @@ -85,24 +85,24 @@ static const u32 ipa_reg_route_fmask[] = { /* Bits 25-31 reserved */ }; -IPA_REG_FIELDS(ROUTE, route, 0x00000048); +REG_FIELDS(ROUTE, route, 0x00000048); -static const u32 ipa_reg_shared_mem_size_fmask[] = { +static const u32 reg_shared_mem_size_fmask[] = { [MEM_SIZE] = GENMASK(15, 0), [MEM_BADDR] = GENMASK(31, 16), }; -IPA_REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054); +REG_FIELDS(SHARED_MEM_SIZE, shared_mem_size, 0x00000054); -static const u32 ipa_reg_qsb_max_writes_fmask[] = { +static const u32 reg_qsb_max_writes_fmask[] = { [GEN_QMB_0_MAX_WRITES] = GENMASK(3, 0), [GEN_QMB_1_MAX_WRITES] = GENMASK(7, 4), /* Bits 8-31 reserved */ }; -IPA_REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074); +REG_FIELDS(QSB_MAX_WRITES, qsb_max_writes, 0x00000074); -static const u32 ipa_reg_qsb_max_reads_fmask[] = { +static const u32 reg_qsb_max_reads_fmask[] = { [GEN_QMB_0_MAX_READS] = GENMASK(3, 0), [GEN_QMB_1_MAX_READS] = GENMASK(7, 4), /* Bits 8-15 reserved */ @@ -110,9 +110,9 @@ static const u32 ipa_reg_qsb_max_reads_fmask[] = { [GEN_QMB_1_MAX_READS_BEATS] = GENMASK(31, 24), }; -IPA_REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078); +REG_FIELDS(QSB_MAX_READS, qsb_max_reads, 0x00000078); -static const u32 ipa_reg_filt_rout_hash_en_fmask[] = { +static const u32 reg_filt_rout_hash_en_fmask[] = { [IPV6_ROUTER_HASH] = BIT(0), /* Bits 1-3 reserved */ [IPV6_FILTER_HASH] = BIT(4), @@ -123,9 
+123,9 @@ static const u32 ipa_reg_filt_rout_hash_en_fmask[] = { /* Bits 13-31 reserved */ }; -IPA_REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x0000148); +REG_FIELDS(FILT_ROUT_HASH_EN, filt_rout_hash_en, 0x0000148); -static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = { +static const u32 reg_filt_rout_hash_flush_fmask[] = { [IPV6_ROUTER_HASH] = BIT(0), /* Bits 1-3 reserved */ [IPV6_FILTER_HASH] = BIT(4), @@ -136,23 +136,23 @@ static const u32 ipa_reg_filt_rout_hash_flush_fmask[] = { /* Bits 13-31 reserved */ }; -IPA_REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x000014c); +REG_FIELDS(FILT_ROUT_HASH_FLUSH, filt_rout_hash_flush, 0x000014c); /* Valid bits defined by ipa->available */ -IPA_REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4, 0x0004); +REG_STRIDE(STATE_AGGR_ACTIVE, state_aggr_active, 0x000000b4, 0x0004); -static const u32 ipa_reg_local_pkt_proc_cntxt_fmask[] = { +static const u32 reg_local_pkt_proc_cntxt_fmask[] = { [IPA_BASE_ADDR] = GENMASK(17, 0), /* Bits 18-31 reserved */ }; /* Offset must be a multiple of 8 */ -IPA_REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8); +REG_FIELDS(LOCAL_PKT_PROC_CNTXT, local_pkt_proc_cntxt, 0x000001e8); /* Valid bits defined by ipa->available */ -IPA_REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004); +REG_STRIDE(AGGR_FORCE_CLOSE, aggr_force_close, 0x000001ec, 0x0004); -static const u32 ipa_reg_ipa_tx_cfg_fmask[] = { +static const u32 reg_ipa_tx_cfg_fmask[] = { /* Bits 0-1 reserved */ [PREFETCH_ALMOST_EMPTY_SIZE_TX0] = GENMASK(5, 2), [DMAW_SCND_OUTSD_PRED_THRESHOLD] = GENMASK(9, 6), @@ -165,9 +165,9 @@ static const u32 ipa_reg_ipa_tx_cfg_fmask[] = { /* Bits 19-31 reserved */ }; -IPA_REG_FIELDS(IPA_TX_CFG, ipa_tx_cfg, 0x000001fc); +REG_FIELDS(IPA_TX_CFG, ipa_tx_cfg, 0x000001fc); -static const u32 ipa_reg_flavor_0_fmask[] = { +static const u32 reg_flavor_0_fmask[] = { [MAX_PIPES] = GENMASK(3, 0), /* Bits 4-7 reserved */ [MAX_CONS_PIPES] = GENMASK(12, 8), @@ -178,17 +178,17 @@ static const u32 ipa_reg_flavor_0_fmask[] = { /* Bits 28-31 reserved */ }; -IPA_REG_FIELDS(FLAVOR_0, flavor_0, 0x00000210); +REG_FIELDS(FLAVOR_0, flavor_0, 0x00000210); -static const u32 ipa_reg_idle_indication_cfg_fmask[] = { +static const u32 reg_idle_indication_cfg_fmask[] = { [ENTER_IDLE_DEBOUNCE_THRESH] = GENMASK(15, 0), [CONST_NON_IDLE_ENABLE] = BIT(16), /* Bits 17-31 reserved */ }; -IPA_REG_FIELDS(IDLE_INDICATION_CFG, idle_indication_cfg, 0x00000240); +REG_FIELDS(IDLE_INDICATION_CFG, idle_indication_cfg, 0x00000240); -static const u32 ipa_reg_qtime_timestamp_cfg_fmask[] = { +static const u32 reg_qtime_timestamp_cfg_fmask[] = { [DPL_TIMESTAMP_LSB] = GENMASK(4, 0), /* Bits 5-6 reserved */ [DPL_TIMESTAMP_SEL] = BIT(7), @@ -198,25 +198,25 @@ static const u32 ipa_reg_qtime_timestamp_cfg_fmask[] = { /* Bits 21-31 reserved */ }; -IPA_REG_FIELDS(QTIME_TIMESTAMP_CFG, qtime_timestamp_cfg, 0x0000024c); +REG_FIELDS(QTIME_TIMESTAMP_CFG, qtime_timestamp_cfg, 0x0000024c); -static const u32 ipa_reg_timers_xo_clk_div_cfg_fmask[] = { +static const u32 reg_timers_xo_clk_div_cfg_fmask[] = { [DIV_VALUE] = GENMASK(8, 0), /* Bits 9-30 reserved */ [DIV_ENABLE] = BIT(31), }; -IPA_REG_FIELDS(TIMERS_XO_CLK_DIV_CFG, timers_xo_clk_div_cfg, 0x00000250); +REG_FIELDS(TIMERS_XO_CLK_DIV_CFG, timers_xo_clk_div_cfg, 0x00000250); -static const u32 ipa_reg_timers_pulse_gran_cfg_fmask[] = { +static const u32 reg_timers_pulse_gran_cfg_fmask[] = { [PULSE_GRAN_0] = GENMASK(2, 0), [PULSE_GRAN_1] = GENMASK(5, 3), [PULSE_GRAN_2] = GENMASK(8, 6), }; 
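/*
 * Illustrative sketch, not part of the patch above: each *_fmask[]
 * table maps a field enumerator to the GENMASK()/BIT() position it
 * occupies within a 32-bit register. Packing a value into a field
 * then reduces to the bitfield helpers from <linux/bitfield.h>; the
 * example_encode_field() helper below is hypothetical and only mirrors
 * what the driver's reg_encode()-style accessors are assumed to do.
 */
#include <linux/bitfield.h>
#include <linux/types.h>

static u32 example_encode_field(const u32 *fmask, int field_id, u32 val)
{
	/* u32_encode_bits() shifts val into the field the mask selects */
	return u32_encode_bits(val, fmask[field_id]);
}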
-IPA_REG_FIELDS(TIMERS_PULSE_GRAN_CFG, timers_pulse_gran_cfg, 0x00000254); +REG_FIELDS(TIMERS_PULSE_GRAN_CFG, timers_pulse_gran_cfg, 0x00000254); -static const u32 ipa_reg_src_rsrc_grp_01_rsrc_type_fmask[] = { +static const u32 reg_src_rsrc_grp_01_rsrc_type_fmask[] = { [X_MIN_LIM] = GENMASK(5, 0), /* Bits 6-7 reserved */ [X_MAX_LIM] = GENMASK(13, 8), @@ -227,10 +227,10 @@ static const u32 ipa_reg_src_rsrc_grp_01_rsrc_type_fmask[] = { /* Bits 30-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type, - 0x00000400, 0x0020); +REG_STRIDE_FIELDS(SRC_RSRC_GRP_01_RSRC_TYPE, src_rsrc_grp_01_rsrc_type, + 0x00000400, 0x0020); -static const u32 ipa_reg_src_rsrc_grp_23_rsrc_type_fmask[] = { +static const u32 reg_src_rsrc_grp_23_rsrc_type_fmask[] = { [X_MIN_LIM] = GENMASK(5, 0), /* Bits 6-7 reserved */ [X_MAX_LIM] = GENMASK(13, 8), @@ -241,10 +241,10 @@ static const u32 ipa_reg_src_rsrc_grp_23_rsrc_type_fmask[] = { /* Bits 30-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type, - 0x00000404, 0x0020); +REG_STRIDE_FIELDS(SRC_RSRC_GRP_23_RSRC_TYPE, src_rsrc_grp_23_rsrc_type, + 0x00000404, 0x0020); -static const u32 ipa_reg_dst_rsrc_grp_01_rsrc_type_fmask[] = { +static const u32 reg_dst_rsrc_grp_01_rsrc_type_fmask[] = { [X_MIN_LIM] = GENMASK(5, 0), /* Bits 6-7 reserved */ [X_MAX_LIM] = GENMASK(13, 8), @@ -255,10 +255,10 @@ static const u32 ipa_reg_dst_rsrc_grp_01_rsrc_type_fmask[] = { /* Bits 30-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type, - 0x00000500, 0x0020); +REG_STRIDE_FIELDS(DST_RSRC_GRP_01_RSRC_TYPE, dst_rsrc_grp_01_rsrc_type, + 0x00000500, 0x0020); -static const u32 ipa_reg_dst_rsrc_grp_23_rsrc_type_fmask[] = { +static const u32 reg_dst_rsrc_grp_23_rsrc_type_fmask[] = { [X_MIN_LIM] = GENMASK(5, 0), /* Bits 6-7 reserved */ [X_MAX_LIM] = GENMASK(13, 8), @@ -269,10 +269,10 @@ static const u32 ipa_reg_dst_rsrc_grp_23_rsrc_type_fmask[] = { /* Bits 30-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type, - 0x00000504, 0x0020); +REG_STRIDE_FIELDS(DST_RSRC_GRP_23_RSRC_TYPE, dst_rsrc_grp_23_rsrc_type, + 0x00000504, 0x0020); -static const u32 ipa_reg_endp_init_cfg_fmask[] = { +static const u32 reg_endp_init_cfg_fmask[] = { [FRAG_OFFLOAD_EN] = BIT(0), [CS_OFFLOAD_EN] = GENMASK(2, 1), [CS_METADATA_HDR_OFFSET] = GENMASK(6, 3), @@ -281,16 +281,16 @@ static const u32 ipa_reg_endp_init_cfg_fmask[] = { /* Bits 9-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_CFG, endp_init_cfg, 0x00000808, 0x0070); -static const u32 ipa_reg_endp_init_nat_fmask[] = { +static const u32 reg_endp_init_nat_fmask[] = { [NAT_EN] = GENMASK(1, 0), /* Bits 2-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_NAT, endp_init_nat, 0x0000080c, 0x0070); -static const u32 ipa_reg_endp_init_hdr_fmask[] = { +static const u32 reg_endp_init_hdr_fmask[] = { [HDR_LEN] = GENMASK(5, 0), [HDR_OFST_METADATA_VALID] = BIT(6), [HDR_OFST_METADATA] = GENMASK(12, 7), @@ -302,9 +302,9 @@ static const u32 ipa_reg_endp_init_hdr_fmask[] = { [HDR_OFST_METADATA_MSB] = GENMASK(31, 30), }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_HDR, endp_init_hdr, 0x00000810, 0x0070); -static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = { +static const u32 reg_endp_init_hdr_ext_fmask[] = { [HDR_ENDIANNESS] = 
BIT(0), [HDR_TOTAL_LEN_OR_PAD_VALID] = BIT(1), [HDR_TOTAL_LEN_OR_PAD] = BIT(2), @@ -318,12 +318,12 @@ static const u32 ipa_reg_endp_init_hdr_ext_fmask[] = { /* Bits 22-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_HDR_EXT, endp_init_hdr_ext, 0x00000814, 0x0070); -IPA_REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask, - 0x00000818, 0x0070); +REG_STRIDE(ENDP_INIT_HDR_METADATA_MASK, endp_init_hdr_metadata_mask, + 0x00000818, 0x0070); -static const u32 ipa_reg_endp_init_mode_fmask[] = { +static const u32 reg_endp_init_mode_fmask[] = { [ENDP_MODE] = GENMASK(2, 0), [DCPH_ENABLE] = BIT(3), [DEST_PIPE_INDEX] = GENMASK(8, 4), @@ -335,9 +335,9 @@ static const u32 ipa_reg_endp_init_mode_fmask[] = { /* Bit 31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_MODE, endp_init_mode, 0x00000820, 0x0070); -static const u32 ipa_reg_endp_init_aggr_fmask[] = { +static const u32 reg_endp_init_aggr_fmask[] = { [AGGR_EN] = GENMASK(1, 0), [AGGR_TYPE] = GENMASK(4, 2), [BYTE_LIMIT] = GENMASK(10, 5), @@ -352,27 +352,27 @@ static const u32 ipa_reg_endp_init_aggr_fmask[] = { /* Bits 28-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_AGGR, endp_init_aggr, 0x00000824, 0x0070); -static const u32 ipa_reg_endp_init_hol_block_en_fmask[] = { +static const u32 reg_endp_init_hol_block_en_fmask[] = { [HOL_BLOCK_EN] = BIT(0), /* Bits 1-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en, - 0x0000082c, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_EN, endp_init_hol_block_en, + 0x0000082c, 0x0070); -static const u32 ipa_reg_endp_init_hol_block_timer_fmask[] = { +static const u32 reg_endp_init_hol_block_timer_fmask[] = { [TIMER_LIMIT] = GENMASK(4, 0), /* Bits 5-7 reserved */ [TIMER_GRAN_SEL] = BIT(8), /* Bits 9-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer, - 0x00000830, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_HOL_BLOCK_TIMER, endp_init_hol_block_timer, + 0x00000830, 0x0070); -static const u32 ipa_reg_endp_init_deaggr_fmask[] = { +static const u32 reg_endp_init_deaggr_fmask[] = { [DEAGGR_HDR_LEN] = GENMASK(5, 0), [SYSPIPE_ERR_DETECTION] = BIT(6), [PACKET_OFFSET_VALID] = BIT(7), @@ -382,24 +382,23 @@ static const u32 ipa_reg_endp_init_deaggr_fmask[] = { [MAX_PACKET_LEN] = GENMASK(31, 16), }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_DEAGGR, endp_init_deaggr, 0x00000834, 0x0070); -static const u32 ipa_reg_endp_init_rsrc_grp_fmask[] = { +static const u32 reg_endp_init_rsrc_grp_fmask[] = { [ENDP_RSRC_GRP] = GENMASK(1, 0), /* Bits 2-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp, - 0x00000838, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_RSRC_GRP, endp_init_rsrc_grp, 0x00000838, 0x0070); -static const u32 ipa_reg_endp_init_seq_fmask[] = { +static const u32 reg_endp_init_seq_fmask[] = { [SEQ_TYPE] = GENMASK(7, 0), /* Bits 8-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070); +REG_STRIDE_FIELDS(ENDP_INIT_SEQ, endp_init_seq, 0x0000083c, 0x0070); -static const u32 ipa_reg_endp_status_fmask[] = { +static const u32 reg_endp_status_fmask[] = { [STATUS_EN] = BIT(0), [STATUS_ENDP] = GENMASK(5, 1), /* Bits 6-8 reserved */ @@ -407,9 +406,9 @@ static const u32 ipa_reg_endp_status_fmask[] = { /* 
Bits 10-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070); +REG_STRIDE_FIELDS(ENDP_STATUS, endp_status, 0x00000840, 0x0070); -static const u32 ipa_reg_endp_filter_router_hsh_cfg_fmask[] = { +static const u32 reg_endp_filter_router_hsh_cfg_fmask[] = { [FILTER_HASH_MSK_SRC_ID] = BIT(0), [FILTER_HASH_MSK_SRC_IP] = BIT(1), [FILTER_HASH_MSK_DST_IP] = BIT(2), @@ -430,83 +429,83 @@ static const u32 ipa_reg_endp_filter_router_hsh_cfg_fmask[] = { /* Bits 23-31 reserved */ }; -IPA_REG_STRIDE_FIELDS(ENDP_FILTER_ROUTER_HSH_CFG, endp_filter_router_hsh_cfg, - 0x0000085c, 0x0070); +REG_STRIDE_FIELDS(ENDP_FILTER_ROUTER_HSH_CFG, endp_filter_router_hsh_cfg, + 0x0000085c, 0x0070); /* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */ -IPA_REG(IPA_IRQ_STTS, ipa_irq_stts, 0x00004008 + 0x1000 * GSI_EE_AP); +REG(IPA_IRQ_STTS, ipa_irq_stts, 0x00004008 + 0x1000 * GSI_EE_AP); /* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */ -IPA_REG(IPA_IRQ_EN, ipa_irq_en, 0x0000400c + 0x1000 * GSI_EE_AP); +REG(IPA_IRQ_EN, ipa_irq_en, 0x0000400c + 0x1000 * GSI_EE_AP); /* Valid bits defined by enum ipa_irq_id; only used for GSI_EE_AP */ -IPA_REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00004010 + 0x1000 * GSI_EE_AP); +REG(IPA_IRQ_CLR, ipa_irq_clr, 0x00004010 + 0x1000 * GSI_EE_AP); -static const u32 ipa_reg_ipa_irq_uc_fmask[] = { +static const u32 reg_ipa_irq_uc_fmask[] = { [UC_INTR] = BIT(0), /* Bits 1-31 reserved */ }; -IPA_REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000401c + 0x1000 * GSI_EE_AP); +REG_FIELDS(IPA_IRQ_UC, ipa_irq_uc, 0x0000401c + 0x1000 * GSI_EE_AP); /* Valid bits defined by ipa->available */ -IPA_REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info, - 0x00004030 + 0x1000 * GSI_EE_AP, 0x0004); +REG_STRIDE(IRQ_SUSPEND_INFO, irq_suspend_info, + 0x00004030 + 0x1000 * GSI_EE_AP, 0x0004); /* Valid bits defined by ipa->available */ -IPA_REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en, - 0x00004034 + 0x1000 * GSI_EE_AP, 0x0004); +REG_STRIDE(IRQ_SUSPEND_EN, irq_suspend_en, + 0x00004034 + 0x1000 * GSI_EE_AP, 0x0004); /* Valid bits defined by ipa->available */ -IPA_REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr, - 0x00004038 + 0x1000 * GSI_EE_AP, 0x0004); - -static const struct ipa_reg *ipa_reg_array[] = { - [COMP_CFG] = &ipa_reg_comp_cfg, - [CLKON_CFG] = &ipa_reg_clkon_cfg, - [ROUTE] = &ipa_reg_route, - [SHARED_MEM_SIZE] = &ipa_reg_shared_mem_size, - [QSB_MAX_WRITES] = &ipa_reg_qsb_max_writes, - [QSB_MAX_READS] = &ipa_reg_qsb_max_reads, - [FILT_ROUT_HASH_EN] = &ipa_reg_filt_rout_hash_en, - [FILT_ROUT_HASH_FLUSH] = &ipa_reg_filt_rout_hash_flush, - [STATE_AGGR_ACTIVE] = &ipa_reg_state_aggr_active, - [LOCAL_PKT_PROC_CNTXT] = &ipa_reg_local_pkt_proc_cntxt, - [AGGR_FORCE_CLOSE] = &ipa_reg_aggr_force_close, - [IPA_TX_CFG] = &ipa_reg_ipa_tx_cfg, - [FLAVOR_0] = &ipa_reg_flavor_0, - [IDLE_INDICATION_CFG] = &ipa_reg_idle_indication_cfg, - [QTIME_TIMESTAMP_CFG] = &ipa_reg_qtime_timestamp_cfg, - [TIMERS_XO_CLK_DIV_CFG] = &ipa_reg_timers_xo_clk_div_cfg, - [TIMERS_PULSE_GRAN_CFG] = &ipa_reg_timers_pulse_gran_cfg, - [SRC_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_01_rsrc_type, - [SRC_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_src_rsrc_grp_23_rsrc_type, - [DST_RSRC_GRP_01_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_01_rsrc_type, - [DST_RSRC_GRP_23_RSRC_TYPE] = &ipa_reg_dst_rsrc_grp_23_rsrc_type, - [ENDP_INIT_CFG] = &ipa_reg_endp_init_cfg, - [ENDP_INIT_NAT] = &ipa_reg_endp_init_nat, - [ENDP_INIT_HDR] = &ipa_reg_endp_init_hdr, - [ENDP_INIT_HDR_EXT] = &ipa_reg_endp_init_hdr_ext, - [ENDP_INIT_HDR_METADATA_MASK] = 
&ipa_reg_endp_init_hdr_metadata_mask, - [ENDP_INIT_MODE] = &ipa_reg_endp_init_mode, - [ENDP_INIT_AGGR] = &ipa_reg_endp_init_aggr, - [ENDP_INIT_HOL_BLOCK_EN] = &ipa_reg_endp_init_hol_block_en, - [ENDP_INIT_HOL_BLOCK_TIMER] = &ipa_reg_endp_init_hol_block_timer, - [ENDP_INIT_DEAGGR] = &ipa_reg_endp_init_deaggr, - [ENDP_INIT_RSRC_GRP] = &ipa_reg_endp_init_rsrc_grp, - [ENDP_INIT_SEQ] = &ipa_reg_endp_init_seq, - [ENDP_STATUS] = &ipa_reg_endp_status, - [ENDP_FILTER_ROUTER_HSH_CFG] = &ipa_reg_endp_filter_router_hsh_cfg, - [IPA_IRQ_STTS] = &ipa_reg_ipa_irq_stts, - [IPA_IRQ_EN] = &ipa_reg_ipa_irq_en, - [IPA_IRQ_CLR] = &ipa_reg_ipa_irq_clr, - [IPA_IRQ_UC] = &ipa_reg_ipa_irq_uc, - [IRQ_SUSPEND_INFO] = &ipa_reg_irq_suspend_info, - [IRQ_SUSPEND_EN] = &ipa_reg_irq_suspend_en, - [IRQ_SUSPEND_CLR] = &ipa_reg_irq_suspend_clr, -}; - -const struct ipa_regs ipa_regs_v4_9 = { - .reg_count = ARRAY_SIZE(ipa_reg_array), - .reg = ipa_reg_array, +REG_STRIDE(IRQ_SUSPEND_CLR, irq_suspend_clr, + 0x00004038 + 0x1000 * GSI_EE_AP, 0x0004); + +static const struct reg *reg_array[] = { + [COMP_CFG] = &reg_comp_cfg, + [CLKON_CFG] = &reg_clkon_cfg, + [ROUTE] = &reg_route, + [SHARED_MEM_SIZE] = &reg_shared_mem_size, + [QSB_MAX_WRITES] = &reg_qsb_max_writes, + [QSB_MAX_READS] = &reg_qsb_max_reads, + [FILT_ROUT_HASH_EN] = &reg_filt_rout_hash_en, + [FILT_ROUT_HASH_FLUSH] = &reg_filt_rout_hash_flush, + [STATE_AGGR_ACTIVE] = &reg_state_aggr_active, + [LOCAL_PKT_PROC_CNTXT] = &reg_local_pkt_proc_cntxt, + [AGGR_FORCE_CLOSE] = &reg_aggr_force_close, + [IPA_TX_CFG] = &reg_ipa_tx_cfg, + [FLAVOR_0] = &reg_flavor_0, + [IDLE_INDICATION_CFG] = &reg_idle_indication_cfg, + [QTIME_TIMESTAMP_CFG] = &reg_qtime_timestamp_cfg, + [TIMERS_XO_CLK_DIV_CFG] = &reg_timers_xo_clk_div_cfg, + [TIMERS_PULSE_GRAN_CFG] = &reg_timers_pulse_gran_cfg, + [SRC_RSRC_GRP_01_RSRC_TYPE] = &reg_src_rsrc_grp_01_rsrc_type, + [SRC_RSRC_GRP_23_RSRC_TYPE] = &reg_src_rsrc_grp_23_rsrc_type, + [DST_RSRC_GRP_01_RSRC_TYPE] = &reg_dst_rsrc_grp_01_rsrc_type, + [DST_RSRC_GRP_23_RSRC_TYPE] = &reg_dst_rsrc_grp_23_rsrc_type, + [ENDP_INIT_CFG] = &reg_endp_init_cfg, + [ENDP_INIT_NAT] = &reg_endp_init_nat, + [ENDP_INIT_HDR] = &reg_endp_init_hdr, + [ENDP_INIT_HDR_EXT] = &reg_endp_init_hdr_ext, + [ENDP_INIT_HDR_METADATA_MASK] = &reg_endp_init_hdr_metadata_mask, + [ENDP_INIT_MODE] = &reg_endp_init_mode, + [ENDP_INIT_AGGR] = &reg_endp_init_aggr, + [ENDP_INIT_HOL_BLOCK_EN] = &reg_endp_init_hol_block_en, + [ENDP_INIT_HOL_BLOCK_TIMER] = &reg_endp_init_hol_block_timer, + [ENDP_INIT_DEAGGR] = &reg_endp_init_deaggr, + [ENDP_INIT_RSRC_GRP] = &reg_endp_init_rsrc_grp, + [ENDP_INIT_SEQ] = &reg_endp_init_seq, + [ENDP_STATUS] = &reg_endp_status, + [ENDP_FILTER_ROUTER_HSH_CFG] = &reg_endp_filter_router_hsh_cfg, + [IPA_IRQ_STTS] = &reg_ipa_irq_stts, + [IPA_IRQ_EN] = &reg_ipa_irq_en, + [IPA_IRQ_CLR] = &reg_ipa_irq_clr, + [IPA_IRQ_UC] = &reg_ipa_irq_uc, + [IRQ_SUSPEND_INFO] = &reg_irq_suspend_info, + [IRQ_SUSPEND_EN] = &reg_irq_suspend_en, + [IRQ_SUSPEND_CLR] = &reg_irq_suspend_clr, +}; + +const struct regs ipa_regs_v4_9 = { + .reg_count = ARRAY_SIZE(reg_array), + .reg = reg_array, }; diff --git a/drivers/net/netdevsim/netdev.c b/drivers/net/netdevsim/netdev.c index 6db6a75ff9b9..35fa1ca98671 100644 --- a/drivers/net/netdevsim/netdev.c +++ b/drivers/net/netdevsim/netdev.c @@ -286,6 +286,7 @@ static void nsim_setup(struct net_device *dev) NETIF_F_TSO; dev->hw_features |= NETIF_F_HW_TC; dev->max_mtu = ETH_MAX_MTU; + dev->xdp_features = NETDEV_XDP_ACT_HW_OFFLOAD; } static int nsim_init_netdevsim(struct netdevsim *ns) diff --git a/drivers/net/pcs/pcs-rzn1-miic.c b/drivers/net/pcs/pcs-rzn1-miic.c index c1424119e821..323bec5e57f8 100644 
--- a/drivers/net/pcs/pcs-rzn1-miic.c +++ b/drivers/net/pcs/pcs-rzn1-miic.c @@ -121,15 +121,11 @@ static const char *index_to_string[MIIC_MODCTRL_CONF_CONV_NUM] = { * struct miic - MII converter structure * @base: base address of the MII converter * @dev: Device associated to the MII converter - * @clks: Clocks used for this device - * @nclk: Number of clocks * @lock: Lock used for read-modify-write access */ struct miic { void __iomem *base; struct device *dev; - struct clk_bulk_data *clks; - int nclk; spinlock_t lock; }; @@ -232,7 +228,7 @@ static int miic_config(struct phylink_pcs *pcs, unsigned int mode, } miic_reg_rmw(miic, MIIC_CONVCTRL(port), mask, val); - miic_converter_enable(miic_port->miic, miic_port->port, 1); + miic_converter_enable(miic, miic_port->port, 1); return 0; } diff --git a/drivers/net/phy/meson-gxl.c b/drivers/net/phy/meson-gxl.c index 5e41658b1e2f..a6015cd03bff 100644 --- a/drivers/net/phy/meson-gxl.c +++ b/drivers/net/phy/meson-gxl.c @@ -261,6 +261,8 @@ static struct phy_driver meson_gxl_phy[] = { .handle_interrupt = meson_gxl_handle_interrupt, .suspend = genphy_suspend, .resume = genphy_resume, + .read_mmd = genphy_read_mmd_unsupported, + .write_mmd = genphy_write_mmd_unsupported, }, { PHY_ID_MATCH_EXACT(0x01803301), .name = "Meson G12A Internal PHY", diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c index 01677c28e407..727de4f4a14d 100644 --- a/drivers/net/phy/micrel.c +++ b/drivers/net/phy/micrel.c @@ -378,6 +378,8 @@ static const struct kszphy_type lan8841_type = { .disable_dll_tx_bit = BIT(14), .disable_dll_rx_bit = BIT(14), .disable_dll_mask = BIT_MASK(14), + .cable_diag_reg = LAN8814_CABLE_DIAG, + .pair_mask = LAN8814_WIRE_PAIR_MASK, }; static int kszphy_extended_write(struct phy_device *phydev, @@ -3520,6 +3522,7 @@ static struct phy_driver ksphy_driver[] = { .phy_id = PHY_ID_LAN8841, .phy_id_mask = MICREL_PHY_ID_MASK, .name = "Microchip LAN8841 Gigabit PHY", + .flags = PHY_POLL_CABLE_TEST, .driver_data = &lan8841_type, .config_init = lan8841_config_init, .probe = lan8841_probe, @@ -3531,6 +3534,8 @@ static struct phy_driver ksphy_driver[] = { .get_stats = kszphy_get_stats, .suspend = genphy_suspend, .resume = genphy_resume, + .cable_test_start = lan8814_cable_test_start, + .cable_test_get_status = ksz886x_cable_test_get_status, }, { .phy_id = PHY_ID_KSZ9131, .phy_id_mask = MICREL_PHY_ID_MASK, diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c index 319790221d7f..ea8fcce5b2d9 100644 --- a/drivers/net/phy/phylink.c +++ b/drivers/net/phy/phylink.c @@ -1816,10 +1816,9 @@ int phylink_fwnode_phy_connect(struct phylink *pl, ret = phy_attach_direct(pl->netdev, phy_dev, flags, pl->link_interface); - if (ret) { - phy_device_free(phy_dev); + phy_device_free(phy_dev); + if (ret) return ret; - } ret = phylink_bringup_phy(pl, phy_dev, pl->link_config.interface); if (ret) diff --git a/drivers/net/tun.c b/drivers/net/tun.c index 745131b2d6db..ad653b32b2f0 100644 --- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -1401,6 +1401,11 @@ static void tun_net_initialize(struct net_device *dev) eth_hw_addr_random(dev); + /* Currently tun does not support XDP, only tap does. 
*/ + dev->xdp_features = NETDEV_XDP_ACT_BASIC | + NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT; + break; } diff --git a/drivers/net/usb/plusb.c b/drivers/net/usb/plusb.c index 2c82fbcaab22..7a2b0094de51 100644 --- a/drivers/net/usb/plusb.c +++ b/drivers/net/usb/plusb.c @@ -57,9 +57,7 @@ static inline int pl_vendor_req(struct usbnet *dev, u8 req, u8 val, u8 index) { - return usbnet_read_cmd(dev, req, - USB_DIR_IN | USB_TYPE_VENDOR | - USB_RECIP_DEVICE, + return usbnet_write_cmd(dev, req, USB_TYPE_VENDOR | USB_RECIP_DEVICE, val, index, NULL, 0); } diff --git a/drivers/net/veth.c b/drivers/net/veth.c index ba3e05832843..1bb54de7124d 100644 --- a/drivers/net/veth.c +++ b/drivers/net/veth.c @@ -1686,6 +1686,10 @@ static void veth_setup(struct net_device *dev) dev->hw_enc_features = VETH_FEATURES; dev->mpls_features = NETIF_F_HW_CSUM | NETIF_F_GSO_SOFTWARE; netif_set_tso_max_size(dev, GSO_MAX_SIZE); + + dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT | NETDEV_XDP_ACT_RX_SG | + NETDEV_XDP_ACT_NDO_XMIT_SG; } /* diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 5256fdd55547..fb5e68ed3ec2 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -3283,7 +3283,10 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog, if (i == 0 && !old_prog) virtnet_clear_guest_offloads(vi); } + if (!old_prog) + xdp_features_set_redirect_target(dev, true); } else { + xdp_features_clear_redirect_target(dev); vi->xdp_enabled = false; } @@ -3919,6 +3922,7 @@ static int virtnet_probe(struct virtio_device *vdev) dev->hw_features |= NETIF_F_GRO_HW; dev->vlan_features = dev->features; + dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT; /* MTU range: 68 - 65535 */ dev->min_mtu = MIN_MTU; @@ -3947,8 +3951,10 @@ static int virtnet_probe(struct virtio_device *vdev) INIT_WORK(&vi->config_work, virtnet_config_changed_work); spin_lock_init(&vi->refill_lock); - if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) + if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) { vi->mergeable_rx_bufs = true; + dev->xdp_features |= NETDEV_XDP_ACT_RX_SG; + } if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_NOTF_COAL)) { vi->rx_usecs = 0; diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c index 12b074286df9..47d54d8ea59d 100644 --- a/drivers/net/xen-netfront.c +++ b/drivers/net/xen-netfront.c @@ -1741,6 +1741,8 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev) * negotiate with the backend regarding supported features. */ netdev->features |= netdev->hw_features; + netdev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT; netdev->ethtool_ops = &xennet_ethtool_ops; netdev->min_mtu = ETH_MIN_MTU; diff --git a/drivers/nvme/host/auth.c b/drivers/nvme/host/auth.c index 4424f53a8a0a..b57630d1d3b8 100644 --- a/drivers/nvme/host/auth.c +++ b/drivers/nvme/host/auth.c @@ -45,6 +45,8 @@ struct nvme_dhchap_queue_context { int sess_key_len; }; +struct workqueue_struct *nvme_auth_wq; + #define nvme_auth_flags_from_qid(qid) \ (qid == 0) ? 
0 : BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_RESERVED #define nvme_auth_queue_from_qid(ctrl, qid) \ @@ -866,7 +868,7 @@ int nvme_auth_negotiate(struct nvme_ctrl *ctrl, int qid) chap = &ctrl->dhchap_ctxs[qid]; cancel_work_sync(&chap->auth_work); - queue_work(nvme_wq, &chap->auth_work); + queue_work(nvme_auth_wq, &chap->auth_work); return 0; } EXPORT_SYMBOL_GPL(nvme_auth_negotiate); @@ -1008,10 +1010,15 @@ EXPORT_SYMBOL_GPL(nvme_auth_free); int __init nvme_init_auth(void) { + nvme_auth_wq = alloc_workqueue("nvme-auth-wq", + WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_SYSFS, 0); + if (!nvme_auth_wq) + return -ENOMEM; + nvme_chap_buf_cache = kmem_cache_create("nvme-chap-buf-cache", CHAP_BUF_SIZE, 0, SLAB_HWCACHE_ALIGN, NULL); if (!nvme_chap_buf_cache) - return -ENOMEM; + goto err_destroy_workqueue; nvme_chap_buf_pool = mempool_create(16, mempool_alloc_slab, mempool_free_slab, nvme_chap_buf_cache); @@ -1021,6 +1028,8 @@ int __init nvme_init_auth(void) return 0; err_destroy_chap_buf_cache: kmem_cache_destroy(nvme_chap_buf_cache); +err_destroy_workqueue: + destroy_workqueue(nvme_auth_wq); return -ENOMEM; } @@ -1028,4 +1037,5 @@ void __exit nvme_exit_auth(void) { mempool_destroy(nvme_chap_buf_pool); kmem_cache_destroy(nvme_chap_buf_cache); + destroy_workqueue(nvme_auth_wq); } diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c index 505e16f20e57..8b6421141162 100644 --- a/drivers/nvme/host/core.c +++ b/drivers/nvme/host/core.c @@ -4921,7 +4921,9 @@ out_cleanup_admin_q: blk_mq_destroy_queue(ctrl->admin_q); blk_put_queue(ctrl->admin_q); out_free_tagset: - blk_mq_free_tag_set(ctrl->admin_tagset); + blk_mq_free_tag_set(set); + ctrl->admin_q = NULL; + ctrl->fabrics_q = NULL; return ret; } EXPORT_SYMBOL_GPL(nvme_alloc_admin_tag_set); @@ -4983,6 +4985,7 @@ int nvme_alloc_io_tag_set(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set, out_free_tag_set: blk_mq_free_tag_set(set); + ctrl->connect_q = NULL; return ret; } EXPORT_SYMBOL_GPL(nvme_alloc_io_tag_set); diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c index ab2627e17bb9..1ab6601fdd5c 100644 --- a/drivers/nvme/target/fc.c +++ b/drivers/nvme/target/fc.c @@ -1685,8 +1685,10 @@ nvmet_fc_ls_create_association(struct nvmet_fc_tgtport *tgtport, else { queue = nvmet_fc_alloc_target_queue(iod->assoc, 0, be16_to_cpu(rqst->assoc_cmd.sqsize)); - if (!queue) + if (!queue) { ret = VERR_QUEUE_ALLOC_FAIL; + nvmet_fc_tgt_a_put(iod->assoc); + } } } diff --git a/drivers/nvmem/brcm_nvram.c b/drivers/nvmem/brcm_nvram.c index 34130449f2d2..39aa27942f28 100644 --- a/drivers/nvmem/brcm_nvram.c +++ b/drivers/nvmem/brcm_nvram.c @@ -98,6 +98,9 @@ static int brcm_nvram_parse(struct brcm_nvram *priv) len = le32_to_cpu(header.len); data = kzalloc(len, GFP_KERNEL); + if (!data) + return -ENOMEM; + memcpy_fromio(data, priv->base, len); data[len - 1] = '\0'; diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c index 321d7d63e068..34ee9d36ee7b 100644 --- a/drivers/nvmem/core.c +++ b/drivers/nvmem/core.c @@ -770,31 +770,32 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config) return ERR_PTR(rval); } - if (config->wp_gpio) - nvmem->wp_gpio = config->wp_gpio; - else if (!config->ignore_wp) + nvmem->id = rval; + + nvmem->dev.type = &nvmem_provider_type; + nvmem->dev.bus = &nvmem_bus_type; + nvmem->dev.parent = config->dev; + + device_initialize(&nvmem->dev); + + if (!config->ignore_wp) nvmem->wp_gpio = gpiod_get_optional(config->dev, "wp", GPIOD_OUT_HIGH); if (IS_ERR(nvmem->wp_gpio)) { - ida_free(&nvmem_ida, nvmem->id); rval = PTR_ERR(nvmem->wp_gpio); - 
kfree(nvmem); - return ERR_PTR(rval); + nvmem->wp_gpio = NULL; + goto err_put_device; } kref_init(&nvmem->refcnt); INIT_LIST_HEAD(&nvmem->cells); - nvmem->id = rval; nvmem->owner = config->owner; if (!nvmem->owner && config->dev->driver) nvmem->owner = config->dev->driver->owner; nvmem->stride = config->stride ?: 1; nvmem->word_size = config->word_size ?: 1; nvmem->size = config->size; - nvmem->dev.type = &nvmem_provider_type; - nvmem->dev.bus = &nvmem_bus_type; - nvmem->dev.parent = config->dev; nvmem->root_only = config->root_only; nvmem->priv = config->priv; nvmem->type = config->type; @@ -822,11 +823,8 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config) break; } - if (rval) { - ida_free(&nvmem_ida, nvmem->id); - kfree(nvmem); - return ERR_PTR(rval); - } + if (rval) + goto err_put_device; nvmem->read_only = device_property_present(config->dev, "read-only") || config->read_only || !nvmem->reg_write; @@ -835,28 +833,22 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config) nvmem->dev.groups = nvmem_dev_groups; #endif - dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name); - - rval = device_register(&nvmem->dev); - if (rval) - goto err_put_device; - if (nvmem->nkeepout) { rval = nvmem_validate_keepouts(nvmem); if (rval) - goto err_device_del; + goto err_put_device; } if (config->compat) { rval = nvmem_sysfs_setup_compat(nvmem, config); if (rval) - goto err_device_del; + goto err_put_device; } if (config->cells) { rval = nvmem_add_cells(nvmem, config->cells, config->ncells); if (rval) - goto err_teardown_compat; + goto err_remove_cells; } rval = nvmem_add_cells_from_table(nvmem); @@ -867,17 +859,20 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config) if (rval) goto err_remove_cells; + dev_dbg(&nvmem->dev, "Registering nvmem device %s\n", config->name); + + rval = device_add(&nvmem->dev); + if (rval) + goto err_remove_cells; + blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem); return nvmem; err_remove_cells: nvmem_device_remove_all_cells(nvmem); -err_teardown_compat: if (config->compat) nvmem_sysfs_remove_compat(nvmem, config); -err_device_del: - device_del(&nvmem->dev); err_put_device: put_device(&nvmem->dev); @@ -1242,16 +1237,21 @@ struct nvmem_cell *of_nvmem_cell_get(struct device_node *np, const char *id) if (!cell_np) return ERR_PTR(-ENOENT); - nvmem_np = of_get_next_parent(cell_np); - if (!nvmem_np) + nvmem_np = of_get_parent(cell_np); + if (!nvmem_np) { + of_node_put(cell_np); return ERR_PTR(-EINVAL); + } nvmem = __nvmem_device_get(nvmem_np, device_match_of_node); of_node_put(nvmem_np); - if (IS_ERR(nvmem)) + if (IS_ERR(nvmem)) { + of_node_put(cell_np); return ERR_CAST(nvmem); + } cell_entry = nvmem_find_cell_entry_by_node(nvmem, cell_np); + of_node_put(cell_np); if (!cell_entry) { __nvmem_device_put(nvmem); return ERR_PTR(-ENOENT); diff --git a/drivers/nvmem/qcom-spmi-sdam.c b/drivers/nvmem/qcom-spmi-sdam.c index 4fcb63507ecd..8499892044b7 100644 --- a/drivers/nvmem/qcom-spmi-sdam.c +++ b/drivers/nvmem/qcom-spmi-sdam.c @@ -166,6 +166,7 @@ static const struct of_device_id sdam_match_table[] = { { .compatible = "qcom,spmi-sdam" }, {}, }; +MODULE_DEVICE_TABLE(of, sdam_match_table); static struct platform_driver sdam_driver = { .driver = { diff --git a/drivers/nvmem/sunxi_sid.c b/drivers/nvmem/sunxi_sid.c index 5750e1f4bcdb..92dfe4cb10e3 100644 --- a/drivers/nvmem/sunxi_sid.c +++ b/drivers/nvmem/sunxi_sid.c @@ -41,8 +41,21 @@ static int sunxi_sid_read(void *context, unsigned int offset, void *val, 
size_t bytes) { struct sunxi_sid *sid = context; + u32 word; + + /* .stride = 4 so offset is guaranteed to be aligned */ + __ioread32_copy(val, sid->base + sid->value_offset + offset, bytes / 4); - memcpy_fromio(val, sid->base + sid->value_offset + offset, bytes); + val += round_down(bytes, 4); + offset += round_down(bytes, 4); + bytes = bytes % 4; + + if (!bytes) + return 0; + + /* Handle any trailing bytes */ + word = readl_relaxed(sid->base + sid->value_offset + offset); + memcpy(val, &word, bytes); return 0; } diff --git a/drivers/of/address.c b/drivers/of/address.c index c34ac33b7338..67763e5b8c0e 100644 --- a/drivers/of/address.c +++ b/drivers/of/address.c @@ -965,8 +965,19 @@ int of_dma_get_range(struct device_node *np, const struct bus_dma_region **map) } of_dma_range_parser_init(&parser, node); - for_each_of_range(&parser, &range) + for_each_of_range(&parser, &range) { + if (range.cpu_addr == OF_BAD_ADDR) { + pr_err("translation of DMA address(%llx) to CPU address failed node(%pOF)\n", + range.bus_addr, node); + continue; + } num_ranges++; + } + + if (!num_ranges) { + ret = -EINVAL; + goto out; + } r = kcalloc(num_ranges + 1, sizeof(*r), GFP_KERNEL); if (!r) { @@ -975,18 +986,16 @@ int of_dma_get_range(struct device_node *np, const struct bus_dma_region **map) } /* - * Record all info in the generic DMA ranges array for struct device. + * Record all info in the generic DMA ranges array for struct device, + * returning an error if we don't find any parsable ranges. */ *map = r; of_dma_range_parser_init(&parser, node); for_each_of_range(&parser, &range) { pr_debug("dma_addr(%llx) cpu_addr(%llx) size(%llx)\n", range.bus_addr, range.cpu_addr, range.size); - if (range.cpu_addr == OF_BAD_ADDR) { - pr_err("translation of DMA address(%llx) to CPU address failed node(%pOF)\n", - range.bus_addr, node); + if (range.cpu_addr == OF_BAD_ADDR) continue; - } r->cpu_start = range.cpu_addr; r->dma_start = range.bus_addr; r->size = range.size; diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c index f08b25195ae7..d1a68b6d03b3 100644 --- a/drivers/of/fdt.c +++ b/drivers/of/fdt.c @@ -26,7 +26,6 @@ #include <linux/serial_core.h> #include <linux/sysfs.h> #include <linux/random.h> -#include <linux/kmemleak.h> #include <asm/setup.h> /* for COMMAND_LINE_SIZE */ #include <asm/page.h> @@ -525,12 +524,9 @@ static int __init __reserved_mem_reserve_reg(unsigned long node, size = dt_mem_next_cell(dt_root_size_cells, &prop); if (size && - early_init_dt_reserve_memory(base, size, nomap) == 0) { + early_init_dt_reserve_memory(base, size, nomap) == 0) pr_debug("Reserved memory: reserved region for node '%s': base %pa, size %lu MiB\n", uname, &base, (unsigned long)(size / SZ_1M)); - if (!nomap) - kmemleak_alloc_phys(base, size, 0); - } else pr_err("Reserved memory: failed to reserve memory for node '%s': base %pa, size %lu MiB\n", uname, &base, (unsigned long)(size / SZ_1M)); diff --git a/drivers/of/platform.c b/drivers/of/platform.c index 81c8c227ab6b..b3878a98d27f 100644 --- a/drivers/of/platform.c +++ b/drivers/of/platform.c @@ -525,6 +525,7 @@ static int __init of_platform_default_populate_init(void) if (IS_ENABLED(CONFIG_PPC)) { struct device_node *boot_display = NULL; struct platform_device *dev; + int display_number = 0; int ret; /* Check if we have a MacOS display without a node spec */ @@ -555,16 +556,23 @@ static int __init of_platform_default_populate_init(void) if (!of_get_property(node, "linux,opened", NULL) || !of_get_property(node, "linux,boot-display", NULL)) continue; - dev = 
of_platform_device_create(node, "of-display", NULL); + dev = of_platform_device_create(node, "of-display.0", NULL); + of_node_put(node); if (WARN_ON(!dev)) return -ENOMEM; boot_display = node; + display_number++; break; } for_each_node_by_type(node, "display") { + char buf[14]; + const char *of_display_format = "of-display.%d"; + if (!of_get_property(node, "linux,opened", NULL) || node == boot_display) continue; - of_platform_device_create(node, "of-display", NULL); + ret = snprintf(buf, sizeof(buf), of_display_format, display_number++); + if (ret < sizeof(buf)) + of_platform_device_create(node, buf, NULL); } } else { diff --git a/drivers/parisc/pdc_stable.c b/drivers/parisc/pdc_stable.c index d6af5726ddf3..2a18f7ba2398 100644 --- a/drivers/parisc/pdc_stable.c +++ b/drivers/parisc/pdc_stable.c @@ -274,8 +274,7 @@ pdcspath_hwpath_write(struct pdcspath_entry *entry, const char *buf, size_t coun /* We'll use a local copy of buf */ count = min_t(size_t, count, sizeof(in)-1); - strncpy(in, buf, count); - in[count] = '\0'; + strscpy(in, buf, count + 1); /* Let's clean up the target. 0xff is a blank pattern */ memset(&hwpath, 0xff, sizeof(hwpath)); @@ -388,8 +387,7 @@ pdcspath_layer_write(struct pdcspath_entry *entry, const char *buf, size_t count /* We'll use a local copy of buf */ count = min_t(size_t, count, sizeof(in)-1); - strncpy(in, buf, count); - in[count] = '\0'; + strscpy(in, buf, count + 1); /* Let's clean up the target. 0 is a blank pattern */ memset(&layers, 0, sizeof(layers)); @@ -756,8 +754,7 @@ static ssize_t pdcs_auto_write(struct kobject *kobj, /* We'll use a local copy of buf */ count = min_t(size_t, count, sizeof(in)-1); - strncpy(in, buf, count); - in[count] = '\0'; + strscpy(in, buf, count + 1); /* Current flags are stored in primary boot path entry */ pathentry = &pdcspath_entry_primary; diff --git a/drivers/rtc/rtc-efi.c b/drivers/rtc/rtc-efi.c index e991cccdb6e9..1e8bc6cc1e12 100644 --- a/drivers/rtc/rtc-efi.c +++ b/drivers/rtc/rtc-efi.c @@ -188,9 +188,10 @@ static int efi_set_time(struct device *dev, struct rtc_time *tm) static int efi_procfs(struct device *dev, struct seq_file *seq) { - efi_time_t eft, alm; - efi_time_cap_t cap; - efi_bool_t enabled, pending; + efi_time_t eft, alm; + efi_time_cap_t cap; + efi_bool_t enabled, pending; + struct rtc_device *rtc = dev_get_drvdata(dev); memset(&eft, 0, sizeof(eft)); memset(&alm, 0, sizeof(alm)); @@ -213,23 +214,25 @@ static int efi_procfs(struct device *dev, struct seq_file *seq) /* XXX fixme: convert to string? */ seq_printf(seq, "Timezone\t: %u\n", eft.timezone); - seq_printf(seq, - "Alarm Time\t: %u:%u:%u.%09u\n" - "Alarm Date\t: %u-%u-%u\n" - "Alarm Daylight\t: %u\n" - "Enabled\t\t: %s\n" - "Pending\t\t: %s\n", - alm.hour, alm.minute, alm.second, alm.nanosecond, - alm.year, alm.month, alm.day, - alm.daylight, - enabled == 1 ? "yes" : "no", - pending == 1 ? "yes" : "no"); - - if (eft.timezone == EFI_UNSPECIFIED_TIMEZONE) - seq_puts(seq, "Timezone\t: unspecified\n"); - else - /* XXX fixme: convert to string? */ - seq_printf(seq, "Timezone\t: %u\n", alm.timezone); + if (test_bit(RTC_FEATURE_ALARM, rtc->features)) { + seq_printf(seq, + "Alarm Time\t: %u:%u:%u.%09u\n" + "Alarm Date\t: %u-%u-%u\n" + "Alarm Daylight\t: %u\n" + "Enabled\t\t: %s\n" + "Pending\t\t: %s\n", + alm.hour, alm.minute, alm.second, alm.nanosecond, + alm.year, alm.month, alm.day, + alm.daylight, + enabled == 1 ? "yes" : "no", + pending == 1 ? 
"yes" : "no"); + + if (eft.timezone == EFI_UNSPECIFIED_TIMEZONE) + seq_puts(seq, "Timezone\t: unspecified\n"); + else + /* XXX fixme: convert to string? */ + seq_printf(seq, "Timezone\t: %u\n", alm.timezone); + } /* * now prints the capabilities @@ -269,7 +272,10 @@ static int __init efi_rtc_probe(struct platform_device *dev) rtc->ops = &efi_rtc_ops; clear_bit(RTC_FEATURE_UPDATE_INTERRUPT, rtc->features); - set_bit(RTC_FEATURE_ALARM_WAKEUP_ONLY, rtc->features); + if (efi_rt_services_supported(EFI_RT_SUPPORTED_WAKEUP_SERVICES)) + set_bit(RTC_FEATURE_ALARM_WAKEUP_ONLY, rtc->features); + else + clear_bit(RTC_FEATURE_ALARM, rtc->features); device_init_wakeup(&dev->dev, true); diff --git a/drivers/rtc/rtc-sunplus.c b/drivers/rtc/rtc-sunplus.c index e8e2ab1103fc..4b578e4d44f6 100644 --- a/drivers/rtc/rtc-sunplus.c +++ b/drivers/rtc/rtc-sunplus.c @@ -240,8 +240,8 @@ static int sp_rtc_probe(struct platform_device *plat_dev) if (IS_ERR(sp_rtc->reg_base)) return dev_err_probe(&plat_dev->dev, PTR_ERR(sp_rtc->reg_base), "%s devm_ioremap_resource fail\n", RTC_REG_NAME); - dev_dbg(&plat_dev->dev, "res = 0x%x, reg_base = 0x%lx\n", - sp_rtc->res->start, (unsigned long)sp_rtc->reg_base); + dev_dbg(&plat_dev->dev, "res = %pR, reg_base = %p\n", + sp_rtc->res, sp_rtc->reg_base); sp_rtc->irq = platform_get_irq(plat_dev, 0); if (sp_rtc->irq < 0) diff --git a/drivers/s390/net/ctcm_fsms.c b/drivers/s390/net/ctcm_fsms.c index dfb84bb03d32..90ec477386a8 100644 --- a/drivers/s390/net/ctcm_fsms.c +++ b/drivers/s390/net/ctcm_fsms.c @@ -370,7 +370,7 @@ static void chx_rx(fsm_instance *fi, int event, void *arg) CTCM_FUNTAIL, dev->name, len); priv->stats.rx_dropped++; priv->stats.rx_length_errors++; - goto again; + goto again; } if (len > ch->max_bufsize) { CTCM_DBF_TEXT_(TRACE, CTC_DBF_NOTICE, @@ -378,7 +378,7 @@ static void chx_rx(fsm_instance *fi, int event, void *arg) CTCM_FUNTAIL, dev->name, len, ch->max_bufsize); priv->stats.rx_dropped++; priv->stats.rx_length_errors++; - goto again; + goto again; } /* @@ -403,7 +403,7 @@ static void chx_rx(fsm_instance *fi, int event, void *arg) *((__u16 *)skb->data) = len; priv->stats.rx_dropped++; priv->stats.rx_length_errors++; - goto again; + goto again; } if (block_len > 2) { *((__u16 *)skb->data) = block_len - 2; @@ -1006,7 +1006,7 @@ static void ctcm_chx_txretry(fsm_instance *fi, int event, void *arg) use gptr as mpc indicator */ if (!(gptr && (fsm_getstate(gptr->fsm) != MPCG_STATE_READY))) ctcm_chx_restart(fi, event, arg); - goto done; + goto done; } CTCM_DBF_TEXT_(TRACE, CTC_DBF_DEBUG, @@ -1024,7 +1024,7 @@ static void ctcm_chx_txretry(fsm_instance *fi, int event, void *arg) CTCM_FUNTAIL, ch->id); fsm_event(priv->fsm, DEV_EVENT_TXDOWN, dev); ctcm_chx_restart(fi, event, arg); - goto done; + goto done; } fsm_addtimer(&ch->timer, 1000, CTC_EVENT_TIMER, ch); if (event == CTC_EVENT_TIMER) /* for TIMER not yet locked */ @@ -1251,12 +1251,12 @@ static void ctcmpc_chx_txdone(fsm_instance *fi, int event, void *arg) if ((ch->collect_len <= 0) || (grp->in_sweep != 0)) { spin_unlock(&ch->collect_lock); fsm_newstate(fi, CTC_STATE_TXIDLE); - goto done; + goto done; } if (ctcm_checkalloc_buffer(ch)) { spin_unlock(&ch->collect_lock); - goto done; + goto done; } ch->trans_skb->data = ch->trans_skb_data; skb_reset_tail_pointer(ch->trans_skb); @@ -1389,7 +1389,7 @@ static void ctcmpc_chx_rx(fsm_instance *fi, int event, void *arg) CTCM_DBF_TEXT_(MPC_ERROR, CTC_DBF_ERROR, "%s(%s): TRANS_SKB = NULL", CTCM_FUNTAIL, dev->name); - goto again; + goto again; } if (len < TH_HEADER_LENGTH) { @@ 
-1409,7 +1409,7 @@ static void ctcmpc_chx_rx(fsm_instance *fi, int event, void *arg) "%s(%s): skb allocation failed", CTCM_FUNTAIL, dev->name); fsm_event(priv->mpcg->fsm, MPCG_EVENT_INOP, dev); - goto again; + goto again; } switch (fsm_getstate(grp->fsm)) { case MPCG_STATE_RESET: @@ -1441,9 +1441,9 @@ again: skb_reset_tail_pointer(ch->trans_skb); ch->trans_skb->len = 0; ch->ccw[1].count = ch->max_bufsize; - if (do_debug_ccw) + if (do_debug_ccw) ctcmpc_dumpit((char *)&ch->ccw[0], - sizeof(struct ccw1) * 3); + sizeof(struct ccw1) * 3); dolock = !in_hardirq(); if (dolock) spin_lock_irqsave( @@ -1562,7 +1562,7 @@ void ctcmpc_chx_rxidle(fsm_instance *fi, int event, void *arg) if (rc != 0) { fsm_newstate(fi, CTC_STATE_RXINIT); ctcm_ccw_check_rc(ch, rc, "initial RX"); - goto done; + goto done; } break; default: @@ -1677,10 +1677,10 @@ static void ctcmpc_chx_attnbusy(fsm_instance *fsm, int event, void *arg) if (fsm_getstate(ch->fsm) == CH_XID0_INPROGRESS) { fsm_newstate(ch->fsm, CH_XID0_PENDING) ; fsm_deltimer(&grp->timer); - goto done; + goto done; } fsm_event(grp->fsm, MPCG_EVENT_INOP, dev); - goto done; + goto done; case MPCG_STATE_XID2INITX: /* XID2 was received before ATTN Busy for second channel.Send yside xid for second channel. @@ -1768,7 +1768,7 @@ static void ctcmpc_chx_send_sweep(fsm_instance *fsm, int event, void *arg) /* give the previous IO time to complete */ fsm_addtimer(&wch->sweep_timer, 200, CTC_EVENT_RSWEEP_TIMER, wch); - goto done; + goto done; } skb = skb_dequeue(&wch->sweep_queue); @@ -1780,7 +1780,7 @@ static void ctcmpc_chx_send_sweep(fsm_instance *fsm, int event, void *arg) ctcm_clear_busy_do(dev); dev_kfree_skb_any(skb); fsm_event(grp->fsm, MPCG_EVENT_INOP, dev); - goto done; + goto done; } else { refcount_inc(&skb->users); skb_queue_tail(&wch->io_queue, skb); diff --git a/drivers/s390/net/ctcm_main.c b/drivers/s390/net/ctcm_main.c index bdfab9ea0046..28db69d91f17 100644 --- a/drivers/s390/net/ctcm_main.c +++ b/drivers/s390/net/ctcm_main.c @@ -494,7 +494,7 @@ static int ctcm_transmit_skb(struct channel *ch, struct sk_buff *skb) ch->collect_len += l; } spin_unlock_irqrestore(&ch->collect_lock, saveflags); - goto done; + goto done; } spin_unlock_irqrestore(&ch->collect_lock, saveflags); /* @@ -685,7 +685,7 @@ static int ctcmpc_transmit_skb(struct channel *ch, struct sk_buff *skb) ch->collect_len += skb->len; spin_unlock_irqrestore(&ch->collect_lock, saveflags); - goto done; + goto done; } /* @@ -885,7 +885,7 @@ static netdev_tx_t ctcmpc_tx(struct sk_buff *skb, struct net_device *dev) "%s(%s): NULL sk_buff passed", CTCM_FUNTAIL, dev->name); priv->stats.tx_dropped++; - goto done; + goto done; } if (skb_headroom(skb) < (TH_HEADER_LENGTH + PDU_HEADER_LENGTH)) { CTCM_DBF_TEXT_(MPC_TRACE, CTC_DBF_ERROR, @@ -908,7 +908,7 @@ static netdev_tx_t ctcmpc_tx(struct sk_buff *skb, struct net_device *dev) priv->stats.tx_errors++; priv->stats.tx_carrier_errors++; fsm_event(grp->fsm, MPCG_EVENT_INOP, dev); - goto done; + goto done; } newskb->protocol = skb->protocol; skb_reserve(newskb, TH_HEADER_LENGTH + PDU_HEADER_LENGTH); @@ -931,7 +931,7 @@ static netdev_tx_t ctcmpc_tx(struct sk_buff *skb, struct net_device *dev) priv->stats.tx_dropped++; priv->stats.tx_errors++; priv->stats.tx_carrier_errors++; - goto done; + goto done; } if (ctcm_test_and_set_busy(dev)) { @@ -943,7 +943,7 @@ static netdev_tx_t ctcmpc_tx(struct sk_buff *skb, struct net_device *dev) priv->stats.tx_errors++; priv->stats.tx_carrier_errors++; fsm_event(grp->fsm, MPCG_EVENT_INOP, dev); - goto done; + goto done; } 
netif_trans_update(dev); @@ -957,7 +957,7 @@ static netdev_tx_t ctcmpc_tx(struct sk_buff *skb, struct net_device *dev) priv->stats.tx_carrier_errors++; ctcm_clear_busy(dev); fsm_event(grp->fsm, MPCG_EVENT_INOP, dev); - goto done; + goto done; } ctcm_clear_busy(dev); done: @@ -1421,7 +1421,7 @@ static int add_channel(struct ccw_device *cdev, enum ctcm_channel_types type, "%s (%s) already in list, using old entry", __func__, (*c)->id); - goto free_return; + goto free_return; } spin_lock_init(&ch->collect_lock); diff --git a/drivers/s390/net/ctcm_mpc.c b/drivers/s390/net/ctcm_mpc.c index 8ac213a55141..b8a226c6e1a9 100644 --- a/drivers/s390/net/ctcm_mpc.c +++ b/drivers/s390/net/ctcm_mpc.c @@ -481,7 +481,7 @@ void ctc_mpc_establish_connectivity(int port_num, grp->estconnfunc = NULL; } fsm_deltimer(&grp->timer); - goto done; + goto done; } if ((wch->in_mpcgroup) && (fsm_getstate(wch->fsm) == CH_XID0_PENDING)) @@ -495,7 +495,7 @@ void ctc_mpc_establish_connectivity(int port_num, grp->estconnfunc = NULL; } fsm_deltimer(&grp->timer); - goto done; + goto done; } break; case MPCG_STATE_XID0IOWAIT: @@ -896,8 +896,9 @@ void mpc_group_ready(unsigned long adev) grp->estconnfunc(grp->port_num, 0, grp->group_max_buflen); grp->estconnfunc = NULL; - } else if (grp->allochanfunc) + } else if (grp->allochanfunc) { grp->allochanfunc(grp->port_num, grp->group_max_buflen); + } grp->send_qllc_disc = 1; grp->changed_side = 0; @@ -1109,7 +1110,7 @@ static void ctcmpc_unpack_skb(struct channel *ch, struct sk_buff *pskb) priv->stats.rx_dropped++; priv->stats.rx_length_errors++; - goto done; + goto done; } skb_reset_mac_header(pskb); new_len = curr_pdu->pdu_offset; @@ -1132,7 +1133,7 @@ static void ctcmpc_unpack_skb(struct channel *ch, struct sk_buff *pskb) CTCM_FUNTAIL, dev->name); priv->stats.rx_dropped++; fsm_event(grp->fsm, MPCG_EVENT_INOP, dev); - goto done; + goto done; } skb_put_data(skb, pskb->data, new_len); @@ -1543,7 +1544,7 @@ static int mpc_validate_xid(struct mpcg_info *mpcginfo) CTCM_DBF_TEXT_(MPC_ERROR, CTC_DBF_ERROR, "%s(%s): xid = NULL", CTCM_FUNTAIL, ch->id); - goto done; + goto done; } CTCM_D3_DUMP((char *)xid, XID2_LENGTH); @@ -1556,7 +1557,7 @@ static int mpc_validate_xid(struct mpcg_info *mpcginfo) CTCM_DBF_TEXT_(MPC_ERROR, CTC_DBF_ERROR, "%s(%s): r/w channel pairing mismatch", CTCM_FUNTAIL, ch->id); - goto done; + goto done; } if (xid->xid2_dlc_type == XID2_READ_SIDE) { diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c index 8bd9fd51208c..1d5b207c2b9e 100644 --- a/drivers/s390/net/qeth_core_main.c +++ b/drivers/s390/net/qeth_core_main.c @@ -2801,9 +2801,11 @@ static void qeth_print_status_message(struct qeth_card *card) * of the level OSA sets the first character to zero * */ if (!card->info.mcl_level[0]) { - sprintf(card->info.mcl_level, "%02x%02x", - card->info.mcl_level[2], - card->info.mcl_level[3]); + scnprintf(card->info.mcl_level, + sizeof(card->info.mcl_level), + "%02x%02x", + card->info.mcl_level[2], + card->info.mcl_level[3]); break; } fallthrough; @@ -6090,7 +6092,7 @@ void qeth_dbf_longtext(debug_info_t *id, int level, char *fmt, ...) 
if (!debug_level_enabled(id, level)) return; va_start(args, fmt); - vsnprintf(dbf_txt_buf, sizeof(dbf_txt_buf), fmt, args); + vscnprintf(dbf_txt_buf, sizeof(dbf_txt_buf), fmt, args); va_end(args); debug_text_event(id, level, dbf_txt_buf); } @@ -6330,8 +6332,8 @@ static int qeth_core_probe_device(struct ccwgroup_device *gdev) goto err_dev; } - snprintf(dbf_name, sizeof(dbf_name), "qeth_card_%s", - dev_name(&gdev->dev)); + scnprintf(dbf_name, sizeof(dbf_name), "qeth_card_%s", + dev_name(&gdev->dev)); card->debug = qeth_get_dbf_entry(dbf_name); if (!card->debug) { rc = qeth_add_dbf_entry(card, dbf_name); diff --git a/drivers/s390/net/qeth_core_sys.c b/drivers/s390/net/qeth_core_sys.c index d1adc4b83193..eea93f8f106f 100644 --- a/drivers/s390/net/qeth_core_sys.c +++ b/drivers/s390/net/qeth_core_sys.c @@ -23,15 +23,15 @@ static ssize_t qeth_dev_state_show(struct device *dev, switch (card->state) { case CARD_STATE_DOWN: - return sprintf(buf, "DOWN\n"); + return sysfs_emit(buf, "DOWN\n"); case CARD_STATE_SOFTSETUP: if (card->dev->flags & IFF_UP) - return sprintf(buf, "UP (LAN %s)\n", - netif_carrier_ok(card->dev) ? "ONLINE" : - "OFFLINE"); - return sprintf(buf, "SOFTSETUP\n"); + return sysfs_emit(buf, "UP (LAN %s)\n", + netif_carrier_ok(card->dev) ? + "ONLINE" : "OFFLINE"); + return sysfs_emit(buf, "SOFTSETUP\n"); default: - return sprintf(buf, "UNKNOWN\n"); + return sysfs_emit(buf, "UNKNOWN\n"); } } @@ -42,7 +42,7 @@ static ssize_t qeth_dev_chpid_show(struct device *dev, { struct qeth_card *card = dev_get_drvdata(dev); - return sprintf(buf, "%02X\n", card->info.chpid); + return sysfs_emit(buf, "%02X\n", card->info.chpid); } static DEVICE_ATTR(chpid, 0444, qeth_dev_chpid_show, NULL); @@ -52,7 +52,7 @@ static ssize_t qeth_dev_if_name_show(struct device *dev, { struct qeth_card *card = dev_get_drvdata(dev); - return sprintf(buf, "%s\n", netdev_name(card->dev)); + return sysfs_emit(buf, "%s\n", netdev_name(card->dev)); } static DEVICE_ATTR(if_name, 0444, qeth_dev_if_name_show, NULL); @@ -62,7 +62,7 @@ static ssize_t qeth_dev_card_type_show(struct device *dev, { struct qeth_card *card = dev_get_drvdata(dev); - return sprintf(buf, "%s\n", qeth_get_cardname_short(card)); + return sysfs_emit(buf, "%s\n", qeth_get_cardname_short(card)); } static DEVICE_ATTR(card_type, 0444, qeth_dev_card_type_show, NULL); @@ -86,7 +86,7 @@ static ssize_t qeth_dev_inbuf_size_show(struct device *dev, { struct qeth_card *card = dev_get_drvdata(dev); - return sprintf(buf, "%s\n", qeth_get_bufsize_str(card)); + return sysfs_emit(buf, "%s\n", qeth_get_bufsize_str(card)); } static DEVICE_ATTR(inbuf_size, 0444, qeth_dev_inbuf_size_show, NULL); @@ -96,7 +96,7 @@ static ssize_t qeth_dev_portno_show(struct device *dev, { struct qeth_card *card = dev_get_drvdata(dev); - return sprintf(buf, "%i\n", card->dev->dev_port); + return sysfs_emit(buf, "%i\n", card->dev->dev_port); } static ssize_t qeth_dev_portno_store(struct device *dev, @@ -134,7 +134,7 @@ static DEVICE_ATTR(portno, 0644, qeth_dev_portno_show, qeth_dev_portno_store); static ssize_t qeth_dev_portname_show(struct device *dev, struct device_attribute *attr, char *buf) { - return sprintf(buf, "no portname required\n"); + return sysfs_emit(buf, "no portname required\n"); } static ssize_t qeth_dev_portname_store(struct device *dev, @@ -157,18 +157,18 @@ static ssize_t qeth_dev_prioqing_show(struct device *dev, switch (card->qdio.do_prio_queueing) { case QETH_PRIO_Q_ING_PREC: - return sprintf(buf, "%s\n", "by precedence"); + return sysfs_emit(buf, "%s\n", "by precedence"); case 
QETH_PRIO_Q_ING_TOS: - return sprintf(buf, "%s\n", "by type of service"); + return sysfs_emit(buf, "%s\n", "by type of service"); case QETH_PRIO_Q_ING_SKB: - return sprintf(buf, "%s\n", "by skb-priority"); + return sysfs_emit(buf, "%s\n", "by skb-priority"); case QETH_PRIO_Q_ING_VLAN: - return sprintf(buf, "%s\n", "by VLAN headers"); + return sysfs_emit(buf, "%s\n", "by VLAN headers"); case QETH_PRIO_Q_ING_FIXED: - return sprintf(buf, "always queue %i\n", + return sysfs_emit(buf, "always queue %i\n", card->qdio.default_out_queue); default: - return sprintf(buf, "disabled\n"); + return sysfs_emit(buf, "disabled\n"); } } @@ -242,7 +242,7 @@ static ssize_t qeth_dev_bufcnt_show(struct device *dev, { struct qeth_card *card = dev_get_drvdata(dev); - return sprintf(buf, "%i\n", card->qdio.in_buf_pool.buf_count); + return sysfs_emit(buf, "%i\n", card->qdio.in_buf_pool.buf_count); } static ssize_t qeth_dev_bufcnt_store(struct device *dev, @@ -298,7 +298,7 @@ static DEVICE_ATTR(recover, 0200, NULL, qeth_dev_recover_store); static ssize_t qeth_dev_performance_stats_show(struct device *dev, struct device_attribute *attr, char *buf) { - return sprintf(buf, "1\n"); + return sysfs_emit(buf, "1\n"); } static ssize_t qeth_dev_performance_stats_store(struct device *dev, @@ -335,7 +335,7 @@ static ssize_t qeth_dev_layer2_show(struct device *dev, { struct qeth_card *card = dev_get_drvdata(dev); - return sprintf(buf, "%i\n", card->options.layer); + return sysfs_emit(buf, "%i\n", card->options.layer); } static ssize_t qeth_dev_layer2_store(struct device *dev, @@ -470,23 +470,25 @@ static ssize_t qeth_dev_switch_attrs_show(struct device *dev, int rc = 0; if (!qeth_card_hw_is_reachable(card)) - return sprintf(buf, "n/a\n"); + return sysfs_emit(buf, "n/a\n"); rc = qeth_query_switch_attributes(card, &sw_info); if (rc) return rc; if (!sw_info.capabilities) - rc = sprintf(buf, "unknown"); + rc = sysfs_emit(buf, "unknown"); if (sw_info.capabilities & QETH_SWITCH_FORW_802_1) - rc = sprintf(buf, (sw_info.settings & QETH_SWITCH_FORW_802_1 ? - "[802.1]" : "802.1")); + rc = sysfs_emit(buf, + (sw_info.settings & QETH_SWITCH_FORW_802_1 ? + "[802.1]" : "802.1")); if (sw_info.capabilities & QETH_SWITCH_FORW_REFL_RELAY) - rc += sprintf(buf + rc, - (sw_info.settings & QETH_SWITCH_FORW_REFL_RELAY ? - " [rr]" : " rr")); - rc += sprintf(buf + rc, "\n"); + rc += sysfs_emit_at(buf, rc, + (sw_info.settings & + QETH_SWITCH_FORW_REFL_RELAY ? 
+ " [rr]" : " rr")); + rc += sysfs_emit_at(buf, rc, "\n"); return rc; } @@ -573,7 +575,7 @@ static ssize_t qeth_dev_blkt_total_show(struct device *dev, { struct qeth_card *card = dev_get_drvdata(dev); - return sprintf(buf, "%i\n", card->info.blkt.time_total); + return sysfs_emit(buf, "%i\n", card->info.blkt.time_total); } static ssize_t qeth_dev_blkt_total_store(struct device *dev, @@ -593,7 +595,7 @@ static ssize_t qeth_dev_blkt_inter_show(struct device *dev, { struct qeth_card *card = dev_get_drvdata(dev); - return sprintf(buf, "%i\n", card->info.blkt.inter_packet); + return sysfs_emit(buf, "%i\n", card->info.blkt.inter_packet); } static ssize_t qeth_dev_blkt_inter_store(struct device *dev, @@ -613,7 +615,7 @@ static ssize_t qeth_dev_blkt_inter_jumbo_show(struct device *dev, { struct qeth_card *card = dev_get_drvdata(dev); - return sprintf(buf, "%i\n", card->info.blkt.inter_packet_jumbo); + return sysfs_emit(buf, "%i\n", card->info.blkt.inter_packet_jumbo); } static ssize_t qeth_dev_blkt_inter_jumbo_store(struct device *dev, diff --git a/drivers/s390/net/qeth_ethtool.c b/drivers/s390/net/qeth_ethtool.c index e250f49535fa..c1caf7734c3e 100644 --- a/drivers/s390/net/qeth_ethtool.c +++ b/drivers/s390/net/qeth_ethtool.c @@ -172,7 +172,7 @@ static void qeth_get_strings(struct net_device *dev, u32 stringset, u8 *data) qeth_add_stat_strings(&data, prefix, card_stats, CARD_STATS_LEN); for (i = 0; i < card->qdio.no_out_queues; i++) { - snprintf(prefix, ETH_GSTRING_LEN, "tx%u ", i); + scnprintf(prefix, ETH_GSTRING_LEN, "tx%u ", i); qeth_add_stat_strings(&data, prefix, txq_stats, TXQ_STATS_LEN); } @@ -192,8 +192,8 @@ static void qeth_get_drvinfo(struct net_device *dev, sizeof(info->driver)); strscpy(info->fw_version, card->info.mcl_level, sizeof(info->fw_version)); - snprintf(info->bus_info, sizeof(info->bus_info), "%s/%s/%s", - CARD_RDEV_ID(card), CARD_WDEV_ID(card), CARD_DDEV_ID(card)); + scnprintf(info->bus_info, sizeof(info->bus_info), "%s/%s/%s", + CARD_RDEV_ID(card), CARD_WDEV_ID(card), CARD_DDEV_ID(card)); } static void qeth_get_channels(struct net_device *dev, diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c index c6ded3fdd715..9f13ed170a43 100644 --- a/drivers/s390/net/qeth_l2_main.c +++ b/drivers/s390/net/qeth_l2_main.c @@ -1255,37 +1255,38 @@ static void qeth_bridge_emit_host_event(struct qeth_card *card, switch (evtype) { case anev_reg_unreg: - snprintf(str[i], sizeof(str[i]), "BRIDGEDHOST=%s", - (code & IPA_ADDR_CHANGE_CODE_REMOVAL) - ? "deregister" : "register"); + scnprintf(str[i], sizeof(str[i]), "BRIDGEDHOST=%s", + (code & IPA_ADDR_CHANGE_CODE_REMOVAL) + ? 
"deregister" : "register"); env[i] = str[i]; i++; if (code & IPA_ADDR_CHANGE_CODE_VLANID) { - snprintf(str[i], sizeof(str[i]), "VLAN=%d", - addr_lnid->lnid); + scnprintf(str[i], sizeof(str[i]), "VLAN=%d", + addr_lnid->lnid); env[i] = str[i]; i++; } if (code & IPA_ADDR_CHANGE_CODE_MACADDR) { - snprintf(str[i], sizeof(str[i]), "MAC=%pM", - addr_lnid->mac); + scnprintf(str[i], sizeof(str[i]), "MAC=%pM", + addr_lnid->mac); env[i] = str[i]; i++; } - snprintf(str[i], sizeof(str[i]), "NTOK_BUSID=%x.%x.%04x", - token->cssid, token->ssid, token->devnum); + scnprintf(str[i], sizeof(str[i]), "NTOK_BUSID=%x.%x.%04x", + token->cssid, token->ssid, token->devnum); env[i] = str[i]; i++; - snprintf(str[i], sizeof(str[i]), "NTOK_IID=%02x", token->iid); + scnprintf(str[i], sizeof(str[i]), "NTOK_IID=%02x", token->iid); env[i] = str[i]; i++; - snprintf(str[i], sizeof(str[i]), "NTOK_CHPID=%02x", - token->chpid); + scnprintf(str[i], sizeof(str[i]), "NTOK_CHPID=%02x", + token->chpid); env[i] = str[i]; i++; - snprintf(str[i], sizeof(str[i]), "NTOK_CHID=%04x", token->chid); + scnprintf(str[i], sizeof(str[i]), "NTOK_CHID=%04x", + token->chid); env[i] = str[i]; i++; break; case anev_abort: - snprintf(str[i], sizeof(str[i]), "BRIDGEDHOST=abort"); + scnprintf(str[i], sizeof(str[i]), "BRIDGEDHOST=abort"); env[i] = str[i]; i++; break; case anev_reset: - snprintf(str[i], sizeof(str[i]), "BRIDGEDHOST=reset"); + scnprintf(str[i], sizeof(str[i]), "BRIDGEDHOST=reset"); env[i] = str[i]; i++; break; } @@ -1314,17 +1315,17 @@ static void qeth_bridge_state_change_worker(struct work_struct *work) NULL }; - snprintf(env_locrem, sizeof(env_locrem), "BRIDGEPORT=statechange"); - snprintf(env_role, sizeof(env_role), "ROLE=%s", - (data->role == QETH_SBP_ROLE_NONE) ? "none" : - (data->role == QETH_SBP_ROLE_PRIMARY) ? "primary" : - (data->role == QETH_SBP_ROLE_SECONDARY) ? "secondary" : - "<INVALID>"); - snprintf(env_state, sizeof(env_state), "STATE=%s", - (data->state == QETH_SBP_STATE_INACTIVE) ? "inactive" : - (data->state == QETH_SBP_STATE_STANDBY) ? "standby" : - (data->state == QETH_SBP_STATE_ACTIVE) ? "active" : - "<INVALID>"); + scnprintf(env_locrem, sizeof(env_locrem), "BRIDGEPORT=statechange"); + scnprintf(env_role, sizeof(env_role), "ROLE=%s", + (data->role == QETH_SBP_ROLE_NONE) ? "none" : + (data->role == QETH_SBP_ROLE_PRIMARY) ? "primary" : + (data->role == QETH_SBP_ROLE_SECONDARY) ? "secondary" : + "<INVALID>"); + scnprintf(env_state, sizeof(env_state), "STATE=%s", + (data->state == QETH_SBP_STATE_INACTIVE) ? "inactive" : + (data->state == QETH_SBP_STATE_STANDBY) ? "standby" : + (data->state == QETH_SBP_STATE_ACTIVE) ? 
"active" : + "<INVALID>"); kobject_uevent_env(&data->card->gdev->dev.kobj, KOBJ_CHANGE, env); kfree(data); diff --git a/drivers/s390/net/qeth_l2_sys.c b/drivers/s390/net/qeth_l2_sys.c index a617351fff57..7f592f912517 100644 --- a/drivers/s390/net/qeth_l2_sys.c +++ b/drivers/s390/net/qeth_l2_sys.c @@ -19,7 +19,7 @@ static ssize_t qeth_bridge_port_role_state_show(struct device *dev, char *word; if (!qeth_bridgeport_allowed(card)) - return sprintf(buf, "n/a (VNIC characteristics)\n"); + return sysfs_emit(buf, "n/a (VNIC characteristics)\n"); mutex_lock(&card->sbp_lock); if (qeth_card_hw_is_reachable(card) && @@ -53,7 +53,7 @@ static ssize_t qeth_bridge_port_role_state_show(struct device *dev, QETH_CARD_TEXT_(card, 2, "SBP%02x:%02x", card->options.sbp.role, state); else - rc = sprintf(buf, "%s\n", word); + rc = sysfs_emit(buf, "%s\n", word); } mutex_unlock(&card->sbp_lock); @@ -66,7 +66,7 @@ static ssize_t qeth_bridge_port_role_show(struct device *dev, struct qeth_card *card = dev_get_drvdata(dev); if (!qeth_bridgeport_allowed(card)) - return sprintf(buf, "n/a (VNIC characteristics)\n"); + return sysfs_emit(buf, "n/a (VNIC characteristics)\n"); return qeth_bridge_port_role_state_show(dev, attr, buf, 0); } @@ -117,7 +117,7 @@ static ssize_t qeth_bridge_port_state_show(struct device *dev, struct qeth_card *card = dev_get_drvdata(dev); if (!qeth_bridgeport_allowed(card)) - return sprintf(buf, "n/a (VNIC characteristics)\n"); + return sysfs_emit(buf, "n/a (VNIC characteristics)\n"); return qeth_bridge_port_role_state_show(dev, attr, buf, 1); } @@ -132,11 +132,11 @@ static ssize_t qeth_bridgeport_hostnotification_show(struct device *dev, int enabled; if (!qeth_bridgeport_allowed(card)) - return sprintf(buf, "n/a (VNIC characteristics)\n"); + return sysfs_emit(buf, "n/a (VNIC characteristics)\n"); enabled = card->options.sbp.hostnotification; - return sprintf(buf, "%d\n", enabled); + return sysfs_emit(buf, "%d\n", enabled); } static ssize_t qeth_bridgeport_hostnotification_store(struct device *dev, @@ -180,7 +180,7 @@ static ssize_t qeth_bridgeport_reflect_show(struct device *dev, char *state; if (!qeth_bridgeport_allowed(card)) - return sprintf(buf, "n/a (VNIC characteristics)\n"); + return sysfs_emit(buf, "n/a (VNIC characteristics)\n"); if (card->options.sbp.reflect_promisc) { if (card->options.sbp.reflect_promisc_primary) @@ -190,7 +190,7 @@ static ssize_t qeth_bridgeport_reflect_show(struct device *dev, } else state = "none"; - return sprintf(buf, "%s\n", state); + return sysfs_emit(buf, "%s\n", state); } static ssize_t qeth_bridgeport_reflect_store(struct device *dev, @@ -280,10 +280,10 @@ static ssize_t qeth_vnicc_timeout_show(struct device *dev, rc = qeth_l2_vnicc_get_timeout(card, &timeout); if (rc == -EBUSY) - return sprintf(buf, "n/a (BridgePort)\n"); + return sysfs_emit(buf, "n/a (BridgePort)\n"); if (rc == -EOPNOTSUPP) - return sprintf(buf, "n/a\n"); - return rc ? rc : sprintf(buf, "%d\n", timeout); + return sysfs_emit(buf, "n/a\n"); + return rc ? rc : sysfs_emit(buf, "%d\n", timeout); } /* change timeout setting */ @@ -318,10 +318,10 @@ static ssize_t qeth_vnicc_char_show(struct device *dev, rc = qeth_l2_vnicc_get_state(card, vnicc, &state); if (rc == -EBUSY) - return sprintf(buf, "n/a (BridgePort)\n"); + return sysfs_emit(buf, "n/a (BridgePort)\n"); if (rc == -EOPNOTSUPP) - return sprintf(buf, "n/a\n"); - return rc ? rc : sprintf(buf, "%d\n", state); + return sysfs_emit(buf, "n/a\n"); + return rc ? 
rc : sysfs_emit(buf, "%d\n", state); } /* change setting of characteristic */ diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c index d8487a10cd55..af4e60d2917e 100644 --- a/drivers/s390/net/qeth_l3_main.c +++ b/drivers/s390/net/qeth_l3_main.c @@ -19,6 +19,7 @@ #include <linux/etherdevice.h> #include <linux/ip.h> #include <linux/in.h> +#include <linux/inet.h> #include <linux/ipv6.h> #include <linux/inetdevice.h> #include <linux/igmp.h> @@ -46,9 +47,9 @@ int qeth_l3_ipaddr_to_string(enum qeth_prot_versions proto, const u8 *addr, char *buf) { if (proto == QETH_PROT_IPV4) - return sprintf(buf, "%pI4", addr); + return scnprintf(buf, INET_ADDRSTRLEN, "%pI4", addr); else - return sprintf(buf, "%pI6", addr); + return scnprintf(buf, INET6_ADDRSTRLEN, "%pI6", addr); } static struct qeth_ipaddr *qeth_l3_find_addr_by_ip(struct qeth_card *card, @@ -158,7 +159,7 @@ static int qeth_l3_add_ip(struct qeth_card *card, struct qeth_ipaddr *tmp_addr) { int rc = 0; struct qeth_ipaddr *addr; - char buf[40]; + char buf[INET6_ADDRSTRLEN]; if (tmp_addr->type == QETH_IP_TYPE_RXIP) QETH_CARD_TEXT(card, 2, "addrxip"); diff --git a/drivers/s390/net/qeth_l3_sys.c b/drivers/s390/net/qeth_l3_sys.c index 1082380b21f8..9f90a860ca2c 100644 --- a/drivers/s390/net/qeth_l3_sys.c +++ b/drivers/s390/net/qeth_l3_sys.c @@ -32,26 +32,26 @@ static ssize_t qeth_l3_dev_route_show(struct qeth_card *card, { switch (route->type) { case PRIMARY_ROUTER: - return sprintf(buf, "%s\n", "primary router"); + return sysfs_emit(buf, "%s\n", "primary router"); case SECONDARY_ROUTER: - return sprintf(buf, "%s\n", "secondary router"); + return sysfs_emit(buf, "%s\n", "secondary router"); case MULTICAST_ROUTER: if (card->info.broadcast_capable == QETH_BROADCAST_WITHOUT_ECHO) - return sprintf(buf, "%s\n", "multicast router+"); + return sysfs_emit(buf, "%s\n", "multicast router+"); else - return sprintf(buf, "%s\n", "multicast router"); + return sysfs_emit(buf, "%s\n", "multicast router"); case PRIMARY_CONNECTOR: if (card->info.broadcast_capable == QETH_BROADCAST_WITHOUT_ECHO) - return sprintf(buf, "%s\n", "primary connector+"); + return sysfs_emit(buf, "%s\n", "primary connector+"); else - return sprintf(buf, "%s\n", "primary connector"); + return sysfs_emit(buf, "%s\n", "primary connector"); case SECONDARY_CONNECTOR: if (card->info.broadcast_capable == QETH_BROADCAST_WITHOUT_ECHO) - return sprintf(buf, "%s\n", "secondary connector+"); + return sysfs_emit(buf, "%s\n", "secondary connector+"); else - return sprintf(buf, "%s\n", "secondary connector"); + return sysfs_emit(buf, "%s\n", "secondary connector"); default: - return sprintf(buf, "%s\n", "no"); + return sysfs_emit(buf, "%s\n", "no"); } } @@ -138,7 +138,7 @@ static ssize_t qeth_l3_dev_sniffer_show(struct device *dev, { struct qeth_card *card = dev_get_drvdata(dev); - return sprintf(buf, "%i\n", card->options.sniffer ? 1 : 0); + return sysfs_emit(buf, "%i\n", card->options.sniffer ? 
1 : 0); } static ssize_t qeth_l3_dev_sniffer_store(struct device *dev, @@ -200,7 +200,7 @@ static ssize_t qeth_l3_dev_hsuid_show(struct device *dev, memcpy(tmp_hsuid, card->options.hsuid, sizeof(tmp_hsuid)); EBCASC(tmp_hsuid, 8); - return sprintf(buf, "%s\n", tmp_hsuid); + return sysfs_emit(buf, "%s\n", tmp_hsuid); } static ssize_t qeth_l3_dev_hsuid_store(struct device *dev, @@ -252,8 +252,8 @@ static ssize_t qeth_l3_dev_hsuid_store(struct device *dev, goto out; } - snprintf(card->options.hsuid, sizeof(card->options.hsuid), - "%-8s", tmp); + scnprintf(card->options.hsuid, sizeof(card->options.hsuid), + "%-8s", tmp); ASCEBC(card->options.hsuid, 8); memcpy(card->dev->perm_addr, card->options.hsuid, 9); @@ -285,7 +285,7 @@ static ssize_t qeth_l3_dev_ipato_enable_show(struct device *dev, { struct qeth_card *card = dev_get_drvdata(dev); - return sprintf(buf, "%u\n", card->ipato.enabled ? 1 : 0); + return sysfs_emit(buf, "%u\n", card->ipato.enabled ? 1 : 0); } static ssize_t qeth_l3_dev_ipato_enable_store(struct device *dev, @@ -330,7 +330,7 @@ static ssize_t qeth_l3_dev_ipato_invert4_show(struct device *dev, { struct qeth_card *card = dev_get_drvdata(dev); - return sprintf(buf, "%u\n", card->ipato.invert4 ? 1 : 0); + return sysfs_emit(buf, "%u\n", card->ipato.invert4 ? 1 : 0); } static ssize_t qeth_l3_dev_ipato_invert4_store(struct device *dev, @@ -367,35 +367,21 @@ static ssize_t qeth_l3_dev_ipato_add_show(char *buf, struct qeth_card *card, enum qeth_prot_versions proto) { struct qeth_ipato_entry *ipatoe; - int str_len = 0; + char addr_str[INET6_ADDRSTRLEN]; + int offset = 0; mutex_lock(&card->ip_lock); list_for_each_entry(ipatoe, &card->ipato.entries, entry) { - char addr_str[40]; - int entry_len; - if (ipatoe->proto != proto) continue; - entry_len = qeth_l3_ipaddr_to_string(proto, ipatoe->addr, - addr_str); - if (entry_len < 0) - continue; - - /* Append /%mask to the entry: */ - entry_len += 1 + ((proto == QETH_PROT_IPV4) ? 2 : 3); - /* Enough room to format %entry\n into null terminated page? */ - if (entry_len + 1 > PAGE_SIZE - str_len - 1) - break; - - entry_len = scnprintf(buf, PAGE_SIZE - str_len, - "%s/%i\n", addr_str, ipatoe->mask_bits); - str_len += entry_len; - buf += entry_len; + qeth_l3_ipaddr_to_string(proto, ipatoe->addr, addr_str); + offset += sysfs_emit_at(buf, offset, "%s/%i\n", + addr_str, ipatoe->mask_bits); } mutex_unlock(&card->ip_lock); - return str_len ? str_len : scnprintf(buf, PAGE_SIZE, "\n"); + return offset ? offset : sysfs_emit(buf, "\n"); } static ssize_t qeth_l3_dev_ipato_add4_show(struct device *dev, @@ -413,7 +399,7 @@ static int qeth_l3_parse_ipatoe(const char *buf, enum qeth_prot_versions proto, int rc; /* Expected input pattern: %addr/%mask */ - sep = strnchr(buf, 40, '/'); + sep = strnchr(buf, INET6_ADDRSTRLEN, '/'); if (!sep) return -EINVAL; @@ -501,7 +487,7 @@ static ssize_t qeth_l3_dev_ipato_invert6_show(struct device *dev, { struct qeth_card *card = dev_get_drvdata(dev); - return sprintf(buf, "%u\n", card->ipato.invert6 ? 1 : 0); + return sysfs_emit(buf, "%u\n", card->ipato.invert6 ? 
1 : 0); } static ssize_t qeth_l3_dev_ipato_invert6_store(struct device *dev, @@ -586,35 +572,22 @@ static ssize_t qeth_l3_dev_ip_add_show(struct device *dev, char *buf, enum qeth_ip_types type) { struct qeth_card *card = dev_get_drvdata(dev); + char addr_str[INET6_ADDRSTRLEN]; struct qeth_ipaddr *ipaddr; - int str_len = 0; + int offset = 0; int i; mutex_lock(&card->ip_lock); hash_for_each(card->ip_htable, i, ipaddr, hnode) { - char addr_str[40]; - int entry_len; - if (ipaddr->proto != proto || ipaddr->type != type) continue; - entry_len = qeth_l3_ipaddr_to_string(proto, (u8 *)&ipaddr->u, - addr_str); - if (entry_len < 0) - continue; - - /* Enough room to format %addr\n into null terminated page? */ - if (entry_len + 1 > PAGE_SIZE - str_len - 1) - break; - - entry_len = scnprintf(buf, PAGE_SIZE - str_len, "%s\n", - addr_str); - str_len += entry_len; - buf += entry_len; + qeth_l3_ipaddr_to_string(proto, (u8 *)&ipaddr->u, addr_str); + offset += sysfs_emit_at(buf, offset, "%s\n", addr_str); } mutex_unlock(&card->ip_lock); - return str_len ? str_len : scnprintf(buf, PAGE_SIZE, "\n"); + return offset ? offset : sysfs_emit(buf, "\n"); } static ssize_t qeth_l3_dev_vipa_add4_show(struct device *dev, diff --git a/drivers/tty/serial/8250/8250_dma.c b/drivers/tty/serial/8250/8250_dma.c index 37d6af2ec427..7fa66501792d 100644 --- a/drivers/tty/serial/8250/8250_dma.c +++ b/drivers/tty/serial/8250/8250_dma.c @@ -43,15 +43,23 @@ static void __dma_rx_complete(struct uart_8250_port *p) struct uart_8250_dma *dma = p->dma; struct tty_port *tty_port = &p->port.state->port; struct dma_tx_state state; + enum dma_status dma_status; int count; - dma->rx_running = 0; - dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state); + /* + * New DMA Rx can be started during the completion handler before it + * could acquire port's lock and it might still be ongoing. Don't do + * anything in such case. + */ + dma_status = dmaengine_tx_status(dma->rxchan, dma->rx_cookie, &state); + if (dma_status == DMA_IN_PROGRESS) + return; count = dma->rx_size - state.residue; tty_insert_flip_string(tty_port, dma->rx_buf, count); p->port.icount.rx += count; + dma->rx_running = 0; tty_flip_buffer_push(tty_port); } @@ -62,9 +70,14 @@ static void dma_rx_complete(void *param) struct uart_8250_dma *dma = p->dma; unsigned long flags; - __dma_rx_complete(p); - spin_lock_irqsave(&p->port.lock, flags); + if (dma->rx_running) + __dma_rx_complete(p); + + /* + * Cannot be combined with the previous check because __dma_rx_complete() + * changes dma->rx_running. 
+ */ if (!dma->rx_running && (serial_lsr_in(p) & UART_LSR_DR)) p->dma->rx_dma(p); spin_unlock_irqrestore(&p->port.lock, flags); diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c index a1490033aa16..409e91d6829a 100644 --- a/drivers/tty/serial/stm32-usart.c +++ b/drivers/tty/serial/stm32-usart.c @@ -797,25 +797,11 @@ static irqreturn_t stm32_usart_interrupt(int irq, void *ptr) spin_unlock(&port->lock); } - if (stm32_usart_rx_dma_enabled(port)) - return IRQ_WAKE_THREAD; - else - return IRQ_HANDLED; -} - -static irqreturn_t stm32_usart_threaded_interrupt(int irq, void *ptr) -{ - struct uart_port *port = ptr; - struct tty_port *tport = &port->state->port; - struct stm32_port *stm32_port = to_stm32_port(port); - unsigned int size; - unsigned long flags; - /* Receiver timeout irq for DMA RX */ - if (!stm32_port->throttled) { - spin_lock_irqsave(&port->lock, flags); + if (stm32_usart_rx_dma_enabled(port) && !stm32_port->throttled) { + spin_lock(&port->lock); size = stm32_usart_receive_chars(port, false); - uart_unlock_and_check_sysrq_irqrestore(port, flags); + uart_unlock_and_check_sysrq(port); if (size) tty_flip_buffer_push(tport); } @@ -1015,10 +1001,8 @@ static int stm32_usart_startup(struct uart_port *port) u32 val; int ret; - ret = request_threaded_irq(port->irq, stm32_usart_interrupt, - stm32_usart_threaded_interrupt, - IRQF_ONESHOT | IRQF_NO_SUSPEND, - name, port); + ret = request_irq(port->irq, stm32_usart_interrupt, + IRQF_NO_SUSPEND, name, port); if (ret) return ret; @@ -1601,13 +1585,6 @@ static int stm32_usart_of_dma_rx_probe(struct stm32_port *stm32port, struct dma_slave_config config; int ret; - /* - * Using DMA and threaded handler for the console could lead to - * deadlocks. - */ - if (uart_console(port)) - return -ENODEV; - stm32port->rx_buf = dma_alloc_coherent(dev, RX_BUF_L, &stm32port->rx_dma_buf, GFP_KERNEL); diff --git a/drivers/tty/vt/vc_screen.c b/drivers/tty/vt/vc_screen.c index 1850bacdb5b0..f566eb1839dc 100644 --- a/drivers/tty/vt/vc_screen.c +++ b/drivers/tty/vt/vc_screen.c @@ -386,10 +386,6 @@ vcs_read(struct file *file, char __user *buf, size_t count, loff_t *ppos) uni_mode = use_unicode(inode); attr = use_attributes(inode); - ret = -ENXIO; - vc = vcs_vc(inode, &viewed); - if (!vc) - goto unlock_out; ret = -EINVAL; if (pos < 0) @@ -407,6 +403,11 @@ vcs_read(struct file *file, char __user *buf, size_t count, loff_t *ppos) unsigned int this_round, skip = 0; int size; + ret = -ENXIO; + vc = vcs_vc(inode, &viewed); + if (!vc) + goto unlock_out; + /* Check whether we are above size each round, * as copy_to_user at the end of this loop * could sleep. 
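Aside on the 8250_dma hunk above: the race fix works because the completion handler now asks the dmaengine whether the cookie it is about to complete is still in flight before it touches the flip buffer or clears rx_running. Below is a minimal sketch of that pattern, not the patch itself: the my_uart_dma context and my_rx_complete callback are hypothetical names, the re-arm step from dma_rx_complete() is omitted, and the two checks are condensed into one condition; only the dmaengine, tty and spinlock calls are real kernel APIs.

	#include <linux/dmaengine.h>
	#include <linux/serial_core.h>
	#include <linux/spinlock.h>
	#include <linux/tty_flip.h>

	struct my_uart_dma {			/* hypothetical driver state */
		struct dma_chan *rxchan;
		dma_cookie_t rx_cookie;
		unsigned char *rx_buf;
		size_t rx_size;
		unsigned int rx_running;
	};

	static void my_rx_complete(void *param)	/* dmaengine callback */
	{
		struct uart_port *port = param;
		struct my_uart_dma *dma = port->private_data;
		struct dma_tx_state state;
		unsigned long flags;
		int count;

		spin_lock_irqsave(&port->lock, flags);

		/*
		 * A newer Rx transfer may have been re-armed before this
		 * callback took the lock; DMA_IN_PROGRESS then means the
		 * cookie is not ours to complete yet.
		 */
		if (dma->rx_running &&
		    dmaengine_tx_status(dma->rxchan, dma->rx_cookie,
					&state) != DMA_IN_PROGRESS) {
			count = dma->rx_size - state.residue;
			tty_insert_flip_string(&port->state->port,
					       dma->rx_buf, count);
			port->icount.rx += count;
			/* Clear the flag only after the data is consumed. */
			dma->rx_running = 0;
			tty_flip_buffer_push(&port->state->port);
		}

		spin_unlock_irqrestore(&port->lock, flags);
	}

The design point is that rx_running is cleared only after the buffer has been pushed, so any concurrent path that tests rx_running under the same port lock cannot re-arm DMA on top of an unfinished completion.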
diff --git a/drivers/usb/dwc3/dwc3-qcom.c b/drivers/usb/dwc3/dwc3-qcom.c index b0a0351d2d8b..959fc925ca7c 100644 --- a/drivers/usb/dwc3/dwc3-qcom.c +++ b/drivers/usb/dwc3/dwc3-qcom.c @@ -901,7 +901,7 @@ static int dwc3_qcom_probe(struct platform_device *pdev) qcom->mode = usb_get_dr_mode(&qcom->dwc3->dev); /* enable vbus override for device mode */ - if (qcom->mode == USB_DR_MODE_PERIPHERAL) + if (qcom->mode != USB_DR_MODE_HOST) dwc3_qcom_vbus_override_enable(qcom, true); /* register extcon to override sw_vbus on Vbus change later */ diff --git a/drivers/usb/fotg210/fotg210-udc.c b/drivers/usb/fotg210/fotg210-udc.c index 87cca81bf4ac..eb076746f032 100644 --- a/drivers/usb/fotg210/fotg210-udc.c +++ b/drivers/usb/fotg210/fotg210-udc.c @@ -1014,7 +1014,6 @@ static int fotg210_udc_start(struct usb_gadget *g, int ret; /* hook up the driver */ - driver->driver.bus = NULL; fotg210->driver = driver; if (!IS_ERR_OR_NULL(fotg210->phy)) { diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c index 523a961b910b..8ad354741380 100644 --- a/drivers/usb/gadget/function/f_fs.c +++ b/drivers/usb/gadget/function/f_fs.c @@ -279,8 +279,10 @@ static int __ffs_ep0_queue_wait(struct ffs_data *ffs, char *data, size_t len) struct usb_request *req = ffs->ep0req; int ret; - if (!req) + if (!req) { + spin_unlock_irq(&ffs->ev.waitq.lock); return -EINVAL; + } req->zero = len < le16_to_cpu(ffs->ev.setup.wLength); diff --git a/drivers/usb/gadget/function/f_uac2.c b/drivers/usb/gadget/function/f_uac2.c index 08726e4c68a5..0219cd79493a 100644 --- a/drivers/usb/gadget/function/f_uac2.c +++ b/drivers/usb/gadget/function/f_uac2.c @@ -1142,6 +1142,7 @@ afunc_bind(struct usb_configuration *cfg, struct usb_function *fn) } std_as_out_if0_desc.bInterfaceNumber = ret; std_as_out_if1_desc.bInterfaceNumber = ret; + std_as_out_if1_desc.bNumEndpoints = 1; uac2->as_out_intf = ret; uac2->as_out_alt = 0; diff --git a/drivers/usb/gadget/udc/bcm63xx_udc.c b/drivers/usb/gadget/udc/bcm63xx_udc.c index 2cdb07905bde..d04d72f5816e 100644 --- a/drivers/usb/gadget/udc/bcm63xx_udc.c +++ b/drivers/usb/gadget/udc/bcm63xx_udc.c @@ -1830,7 +1830,6 @@ static int bcm63xx_udc_start(struct usb_gadget *gadget, bcm63xx_select_phy_mode(udc, true); udc->driver = driver; - driver->driver.bus = NULL; udc->gadget.dev.of_node = udc->dev->of_node; spin_unlock_irqrestore(&udc->lock, flags); diff --git a/drivers/usb/gadget/udc/fsl_qe_udc.c b/drivers/usb/gadget/udc/fsl_qe_udc.c index bf745358e28e..3b1cc8fa30c8 100644 --- a/drivers/usb/gadget/udc/fsl_qe_udc.c +++ b/drivers/usb/gadget/udc/fsl_qe_udc.c @@ -2285,7 +2285,6 @@ static int fsl_qe_start(struct usb_gadget *gadget, /* lock is needed but whether should use this lock or another */ spin_lock_irqsave(&udc->lock, flags); - driver->driver.bus = NULL; /* hook up the driver */ udc->driver = driver; udc->gadget.speed = driver->max_speed; diff --git a/drivers/usb/gadget/udc/fsl_udc_core.c b/drivers/usb/gadget/udc/fsl_udc_core.c index 50435e804118..a67873a074b7 100644 --- a/drivers/usb/gadget/udc/fsl_udc_core.c +++ b/drivers/usb/gadget/udc/fsl_udc_core.c @@ -1943,7 +1943,6 @@ static int fsl_udc_start(struct usb_gadget *g, /* lock is needed but whether should use this lock or another */ spin_lock_irqsave(&udc_controller->lock, flags); - driver->driver.bus = NULL; /* hook up the driver */ udc_controller->driver = driver; spin_unlock_irqrestore(&udc_controller->lock, flags); diff --git a/drivers/usb/gadget/udc/fusb300_udc.c b/drivers/usb/gadget/udc/fusb300_udc.c index 9af8b415f303..5954800d652c 
100644 --- a/drivers/usb/gadget/udc/fusb300_udc.c +++ b/drivers/usb/gadget/udc/fusb300_udc.c @@ -1311,7 +1311,6 @@ static int fusb300_udc_start(struct usb_gadget *g, struct fusb300 *fusb300 = to_fusb300(g); /* hook up the driver */ - driver->driver.bus = NULL; fusb300->driver = driver; return 0; diff --git a/drivers/usb/gadget/udc/goku_udc.c b/drivers/usb/gadget/udc/goku_udc.c index bdc56b24b5c9..5ffb3d5c635b 100644 --- a/drivers/usb/gadget/udc/goku_udc.c +++ b/drivers/usb/gadget/udc/goku_udc.c @@ -1375,7 +1375,6 @@ static int goku_udc_start(struct usb_gadget *g, struct goku_udc *dev = to_goku_udc(g); /* hook up the driver */ - driver->driver.bus = NULL; dev->driver = driver; /* diff --git a/drivers/usb/gadget/udc/gr_udc.c b/drivers/usb/gadget/udc/gr_udc.c index 22096f8505de..85cdc0af3bf9 100644 --- a/drivers/usb/gadget/udc/gr_udc.c +++ b/drivers/usb/gadget/udc/gr_udc.c @@ -1906,7 +1906,6 @@ static int gr_udc_start(struct usb_gadget *gadget, spin_lock(&dev->lock); /* Hook up the driver */ - driver->driver.bus = NULL; dev->driver = driver; /* Get ready for host detection */ diff --git a/drivers/usb/gadget/udc/m66592-udc.c b/drivers/usb/gadget/udc/m66592-udc.c index c7e421b449f3..06e21cee431b 100644 --- a/drivers/usb/gadget/udc/m66592-udc.c +++ b/drivers/usb/gadget/udc/m66592-udc.c @@ -1454,7 +1454,6 @@ static int m66592_udc_start(struct usb_gadget *g, struct m66592 *m66592 = to_m66592(g); /* hook up the driver */ - driver->driver.bus = NULL; m66592->driver = driver; m66592_bset(m66592, M66592_VBSE | M66592_URST, M66592_INTENB0); diff --git a/drivers/usb/gadget/udc/max3420_udc.c b/drivers/usb/gadget/udc/max3420_udc.c index 3074da00c3df..ddf0ed3eb4f2 100644 --- a/drivers/usb/gadget/udc/max3420_udc.c +++ b/drivers/usb/gadget/udc/max3420_udc.c @@ -1108,7 +1108,6 @@ static int max3420_udc_start(struct usb_gadget *gadget, spin_lock_irqsave(&udc->lock, flags); /* hook up the driver */ - driver->driver.bus = NULL; udc->driver = driver; udc->gadget.speed = USB_SPEED_FULL; diff --git a/drivers/usb/gadget/udc/mv_u3d_core.c b/drivers/usb/gadget/udc/mv_u3d_core.c index 598654a3cb41..411b6179782c 100644 --- a/drivers/usb/gadget/udc/mv_u3d_core.c +++ b/drivers/usb/gadget/udc/mv_u3d_core.c @@ -1243,7 +1243,6 @@ static int mv_u3d_start(struct usb_gadget *g, } /* hook up the driver ... */ - driver->driver.bus = NULL; u3d->driver = driver; u3d->ep0_dir = USB_DIR_OUT; diff --git a/drivers/usb/gadget/udc/mv_udc_core.c b/drivers/usb/gadget/udc/mv_udc_core.c index fdb17d86cd65..b397f3a848cf 100644 --- a/drivers/usb/gadget/udc/mv_udc_core.c +++ b/drivers/usb/gadget/udc/mv_udc_core.c @@ -1359,7 +1359,6 @@ static int mv_udc_start(struct usb_gadget *gadget, spin_lock_irqsave(&udc->lock, flags); /* hook up the driver ... */ - driver->driver.bus = NULL; udc->driver = driver; udc->usb_state = USB_STATE_ATTACHED; diff --git a/drivers/usb/gadget/udc/net2272.c b/drivers/usb/gadget/udc/net2272.c index 84605a4d0715..538c1b9a2883 100644 --- a/drivers/usb/gadget/udc/net2272.c +++ b/drivers/usb/gadget/udc/net2272.c @@ -1451,7 +1451,6 @@ static int net2272_start(struct usb_gadget *_gadget, dev->ep[i].irqs = 0; /* hook up the driver ... */ dev->softconnect = 1; - driver->driver.bus = NULL; dev->driver = driver; /* ... 
then enable host detection and ep0; and we're ready diff --git a/drivers/usb/gadget/udc/net2280.c b/drivers/usb/gadget/udc/net2280.c index d6a68631354a..1b929c519cd7 100644 --- a/drivers/usb/gadget/udc/net2280.c +++ b/drivers/usb/gadget/udc/net2280.c @@ -2423,7 +2423,6 @@ static int net2280_start(struct usb_gadget *_gadget, dev->ep[i].irqs = 0; /* hook up the driver ... */ - driver->driver.bus = NULL; dev->driver = driver; retval = device_create_file(&dev->pdev->dev, &dev_attr_function); diff --git a/drivers/usb/gadget/udc/omap_udc.c b/drivers/usb/gadget/udc/omap_udc.c index bea346e362b2..f660ebfa1379 100644 --- a/drivers/usb/gadget/udc/omap_udc.c +++ b/drivers/usb/gadget/udc/omap_udc.c @@ -2066,7 +2066,6 @@ static int omap_udc_start(struct usb_gadget *g, udc->softconnect = 1; /* hook up the driver */ - driver->driver.bus = NULL; udc->driver = driver; spin_unlock_irqrestore(&udc->lock, flags); diff --git a/drivers/usb/gadget/udc/pch_udc.c b/drivers/usb/gadget/udc/pch_udc.c index 9bb7a9d7a2fb..4f8617210d85 100644 --- a/drivers/usb/gadget/udc/pch_udc.c +++ b/drivers/usb/gadget/udc/pch_udc.c @@ -2908,7 +2908,6 @@ static int pch_udc_start(struct usb_gadget *g, { struct pch_udc_dev *dev = to_pch_udc(g); - driver->driver.bus = NULL; dev->driver = driver; /* get ready for ep0 traffic */ diff --git a/drivers/usb/gadget/udc/snps_udc_core.c b/drivers/usb/gadget/udc/snps_udc_core.c index 52ea4dcf6a92..2fc5d4d277bc 100644 --- a/drivers/usb/gadget/udc/snps_udc_core.c +++ b/drivers/usb/gadget/udc/snps_udc_core.c @@ -1933,7 +1933,6 @@ static int amd5536_udc_start(struct usb_gadget *g, struct udc *dev = to_amd5536_udc(g); u32 tmp; - driver->driver.bus = NULL; dev->driver = driver; /* Some gadget drivers use both ep0 directions. diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c index 1292241d581a..1cf8947c6d66 100644 --- a/drivers/usb/typec/ucsi/ucsi.c +++ b/drivers/usb/typec/ucsi/ucsi.c @@ -1269,6 +1269,9 @@ err_unregister: con->port = NULL; } + kfree(ucsi->connector); + ucsi->connector = NULL; + err_reset: memset(&ucsi->cap, 0, sizeof(ucsi->cap)); ucsi_reset_ppm(ucsi); @@ -1300,7 +1303,8 @@ static void ucsi_resume_work(struct work_struct *work) int ucsi_resume(struct ucsi *ucsi) { - queue_work(system_long_wq, &ucsi->resume_work); + if (ucsi->connector) + queue_work(system_long_wq, &ucsi->resume_work); return 0; } EXPORT_SYMBOL_GPL(ucsi_resume); @@ -1420,6 +1424,9 @@ void ucsi_unregister(struct ucsi *ucsi) /* Disable notifications */ ucsi->ops->async_write(ucsi, UCSI_CONTROL, &cmd, sizeof(cmd)); + if (!ucsi->connector) + return; + for (i = 0; i < ucsi->cap.num_connectors; i++) { cancel_work_sync(&ucsi->connector[i].work); ucsi_unregister_partner(&ucsi->connector[i]); diff --git a/drivers/video/fbdev/atmel_lcdfb.c b/drivers/video/fbdev/atmel_lcdfb.c index 1fc8de4ecbeb..8187a7c4f910 100644 --- a/drivers/video/fbdev/atmel_lcdfb.c +++ b/drivers/video/fbdev/atmel_lcdfb.c @@ -49,7 +49,6 @@ struct atmel_lcdfb_info { struct clk *lcdc_clk; struct backlight_device *backlight; - u8 bl_power; u8 saved_lcdcon; u32 pseudo_palette[16]; @@ -109,22 +108,7 @@ static u32 contrast_ctr = ATMEL_LCDC_PS_DIV8 static int atmel_bl_update_status(struct backlight_device *bl) { struct atmel_lcdfb_info *sinfo = bl_get_data(bl); - int power = sinfo->bl_power; - int brightness = bl->props.brightness; - - /* REVISIT there may be a meaningful difference between - * fb_blank and power ... there seem to be some cases - * this doesn't handle correctly. 
- */ - if (bl->props.fb_blank != sinfo->bl_power) - power = bl->props.fb_blank; - else if (bl->props.power != sinfo->bl_power) - power = bl->props.power; - - if (brightness < 0 && power == FB_BLANK_UNBLANK) - brightness = lcdc_readl(sinfo, ATMEL_LCDC_CONTRAST_VAL); - else if (power != FB_BLANK_UNBLANK) - brightness = 0; + int brightness = backlight_get_brightness(bl); lcdc_writel(sinfo, ATMEL_LCDC_CONTRAST_VAL, brightness); if (contrast_ctr & ATMEL_LCDC_POL_POSITIVE) @@ -133,8 +117,6 @@ static int atmel_bl_update_status(struct backlight_device *bl) else lcdc_writel(sinfo, ATMEL_LCDC_CONTRAST_CTR, contrast_ctr); - bl->props.fb_blank = bl->props.power = sinfo->bl_power = power; - return 0; } @@ -155,8 +137,6 @@ static void init_backlight(struct atmel_lcdfb_info *sinfo) struct backlight_properties props; struct backlight_device *bl; - sinfo->bl_power = FB_BLANK_UNBLANK; - if (sinfo->backlight) return; diff --git a/drivers/video/fbdev/aty/aty128fb.c b/drivers/video/fbdev/aty/aty128fb.c index dd31b9d7d337..36a9ac05a340 100644 --- a/drivers/video/fbdev/aty/aty128fb.c +++ b/drivers/video/fbdev/aty/aty128fb.c @@ -1766,12 +1766,10 @@ static int aty128_bl_update_status(struct backlight_device *bd) unsigned int reg = aty_ld_le32(LVDS_GEN_CNTL); int level; - if (bd->props.power != FB_BLANK_UNBLANK || - bd->props.fb_blank != FB_BLANK_UNBLANK || - !par->lcd_on) + if (!par->lcd_on) level = 0; else - level = bd->props.brightness; + level = backlight_get_brightness(bd); reg |= LVDS_BL_MOD_EN | LVDS_BLON; if (level > 0) { diff --git a/drivers/video/fbdev/aty/atyfb_base.c b/drivers/video/fbdev/aty/atyfb_base.c index d59215a4992e..b02e4e645035 100644 --- a/drivers/video/fbdev/aty/atyfb_base.c +++ b/drivers/video/fbdev/aty/atyfb_base.c @@ -2219,13 +2219,7 @@ static int aty_bl_update_status(struct backlight_device *bd) { struct atyfb_par *par = bl_get_data(bd); unsigned int reg = aty_ld_lcd(LCD_MISC_CNTL, par); - int level; - - if (bd->props.power != FB_BLANK_UNBLANK || - bd->props.fb_blank != FB_BLANK_UNBLANK) - level = 0; - else - level = bd->props.brightness; + int level = backlight_get_brightness(bd); reg |= (BLMOD_EN | BIASMOD_EN); if (level > 0) { diff --git a/drivers/video/fbdev/aty/radeon_backlight.c b/drivers/video/fbdev/aty/radeon_backlight.c index d2c1263ad260..427adc838f77 100644 --- a/drivers/video/fbdev/aty/radeon_backlight.c +++ b/drivers/video/fbdev/aty/radeon_backlight.c @@ -57,11 +57,7 @@ static int radeon_bl_update_status(struct backlight_device *bd) * backlight. This provides some greater power saving and the display * is useless without backlight anyway. 
*/ - if (bd->props.power != FB_BLANK_UNBLANK || - bd->props.fb_blank != FB_BLANK_UNBLANK) - level = 0; - else - level = bd->props.brightness; + level = backlight_get_brightness(bd); del_timer_sync(&rinfo->lvds_timer); radeon_engine_idle(); diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c index 14a7d404062c..1b14c21af2b7 100644 --- a/drivers/video/fbdev/core/fbcon.c +++ b/drivers/video/fbdev/core/fbcon.c @@ -2495,9 +2495,12 @@ static int fbcon_set_font(struct vc_data *vc, struct console_font *font, h > FBCON_SWAP(info->var.rotate, info->var.yres, info->var.xres)) return -EINVAL; + if (font->width > 32 || font->height > 32) + return -EINVAL; + /* Make sure drawing engine can handle the font */ - if (!(info->pixmap.blit_x & (1 << (font->width - 1))) || - !(info->pixmap.blit_y & (1 << (font->height - 1)))) + if (!(info->pixmap.blit_x & BIT(font->width - 1)) || + !(info->pixmap.blit_y & BIT(font->height - 1))) return -EINVAL; /* Make sure driver can handle the font length */ diff --git a/drivers/video/fbdev/core/fbmon.c b/drivers/video/fbdev/core/fbmon.c index b0e690f41025..79e5bfbdd34c 100644 --- a/drivers/video/fbdev/core/fbmon.c +++ b/drivers/video/fbdev/core/fbmon.c @@ -1050,7 +1050,7 @@ static u32 fb_get_vblank(u32 hfreq) } /** - * fb_get_hblank_by_freq - get horizontal blank time given hfreq + * fb_get_hblank_by_hfreq - get horizontal blank time given hfreq * @hfreq: horizontal freq * @xres: horizontal resolution in pixels * diff --git a/drivers/video/fbdev/mx3fb.c b/drivers/video/fbdev/mx3fb.c index b945b68984b9..76771e126d0a 100644 --- a/drivers/video/fbdev/mx3fb.c +++ b/drivers/video/fbdev/mx3fb.c @@ -283,12 +283,7 @@ static int mx3fb_bl_get_brightness(struct backlight_device *bl) static int mx3fb_bl_update_status(struct backlight_device *bl) { struct mx3fb_data *fbd = bl_get_data(bl); - int brightness = bl->props.brightness; - - if (bl->props.power != FB_BLANK_UNBLANK) - brightness = 0; - if (bl->props.fb_blank != FB_BLANK_UNBLANK) - brightness = 0; + int brightness = backlight_get_brightness(bl); fbd->backlight_level = (fbd->backlight_level & ~0xFF) | brightness; diff --git a/drivers/video/fbdev/nvidia/nv_backlight.c b/drivers/video/fbdev/nvidia/nv_backlight.c index 2ce53529f636..503a7a683855 100644 --- a/drivers/video/fbdev/nvidia/nv_backlight.c +++ b/drivers/video/fbdev/nvidia/nv_backlight.c @@ -49,17 +49,11 @@ static int nvidia_bl_update_status(struct backlight_device *bd) { struct nvidia_par *par = bl_get_data(bd); u32 tmp_pcrt, tmp_pmc, fpcontrol; - int level; + int level = backlight_get_brightness(bd); if (!par->FlatPanel) return 0; - if (bd->props.power != FB_BLANK_UNBLANK || - bd->props.fb_blank != FB_BLANK_UNBLANK) - level = 0; - else - level = bd->props.brightness; - tmp_pmc = NV_RD32(par->PMC, 0x10F0) & 0x0000FFFF; tmp_pcrt = NV_RD32(par->PCRTC0, 0x081C) & 0xFFFFFFFC; fpcontrol = NV_RD32(par->PRAMDAC, 0x0848) & 0xCFFFFFCC; diff --git a/drivers/video/fbdev/omap2/omapfb/displays/panel-dsi-cm.c b/drivers/video/fbdev/omap2/omapfb/displays/panel-dsi-cm.c index 4fc4b26a8d30..ba94a0a7bd4f 100644 --- a/drivers/video/fbdev/omap2/omapfb/displays/panel-dsi-cm.c +++ b/drivers/video/fbdev/omap2/omapfb/displays/panel-dsi-cm.c @@ -331,13 +331,7 @@ static int dsicm_bl_update_status(struct backlight_device *dev) struct panel_drv_data *ddata = dev_get_drvdata(&dev->dev); struct omap_dss_device *in = ddata->in; int r; - int level; - - if (dev->props.fb_blank == FB_BLANK_UNBLANK && - dev->props.power == FB_BLANK_UNBLANK) - level = dev->props.brightness; - else 
- level = 0; + int level = backlight_get_brightness(dev); dev_dbg(&ddata->pdev->dev, "update brightness to %d\n", level); diff --git a/drivers/video/fbdev/omap2/omapfb/dss/display-sysfs.c b/drivers/video/fbdev/omap2/omapfb/dss/display-sysfs.c index bc5a44c2a144..ae937854403b 100644 --- a/drivers/video/fbdev/omap2/omapfb/dss/display-sysfs.c +++ b/drivers/video/fbdev/omap2/omapfb/dss/display-sysfs.c @@ -10,6 +10,7 @@ #define DSS_SUBSYS_NAME "DISPLAY" #include <linux/kernel.h> +#include <linux/kstrtox.h> #include <linux/module.h> #include <linux/platform_device.h> #include <linux/sysfs.h> @@ -36,7 +37,7 @@ static ssize_t display_enabled_store(struct omap_dss_device *dssdev, int r; bool enable; - r = strtobool(buf, &enable); + r = kstrtobool(buf, &enable); if (r) return r; @@ -73,7 +74,7 @@ static ssize_t display_tear_store(struct omap_dss_device *dssdev, if (!dssdev->driver->enable_te || !dssdev->driver->get_te) return -ENOENT; - r = strtobool(buf, &te); + r = kstrtobool(buf, &te); if (r) return r; @@ -183,7 +184,7 @@ static ssize_t display_mirror_store(struct omap_dss_device *dssdev, if (!dssdev->driver->set_mirror || !dssdev->driver->get_mirror) return -ENOENT; - r = strtobool(buf, &mirror); + r = kstrtobool(buf, &mirror); if (r) return r; diff --git a/drivers/video/fbdev/omap2/omapfb/dss/manager-sysfs.c b/drivers/video/fbdev/omap2/omapfb/dss/manager-sysfs.c index ba21c4a2633d..1b644be5fe2e 100644 --- a/drivers/video/fbdev/omap2/omapfb/dss/manager-sysfs.c +++ b/drivers/video/fbdev/omap2/omapfb/dss/manager-sysfs.c @@ -10,6 +10,7 @@ #define DSS_SUBSYS_NAME "MANAGER" #include <linux/kernel.h> +#include <linux/kstrtox.h> #include <linux/slab.h> #include <linux/module.h> #include <linux/platform_device.h> @@ -246,7 +247,7 @@ static ssize_t manager_trans_key_enabled_store(struct omap_overlay_manager *mgr, bool enable; int r; - r = strtobool(buf, &enable); + r = kstrtobool(buf, &enable); if (r) return r; @@ -290,7 +291,7 @@ static ssize_t manager_alpha_blending_enabled_store( if(!dss_has_feature(FEAT_ALPHA_FIXED_ZORDER)) return -ENODEV; - r = strtobool(buf, &enable); + r = kstrtobool(buf, &enable); if (r) return r; @@ -329,7 +330,7 @@ static ssize_t manager_cpr_enable_store(struct omap_overlay_manager *mgr, if (!dss_has_feature(FEAT_CPR)) return -ENODEV; - r = strtobool(buf, &enable); + r = kstrtobool(buf, &enable); if (r) return r; diff --git a/drivers/video/fbdev/omap2/omapfb/dss/overlay-sysfs.c b/drivers/video/fbdev/omap2/omapfb/dss/overlay-sysfs.c index 601c0beb6de9..1da4fb1c77b4 100644 --- a/drivers/video/fbdev/omap2/omapfb/dss/overlay-sysfs.c +++ b/drivers/video/fbdev/omap2/omapfb/dss/overlay-sysfs.c @@ -13,6 +13,7 @@ #include <linux/err.h> #include <linux/sysfs.h> #include <linux/kobject.h> +#include <linux/kstrtox.h> #include <linux/platform_device.h> #include <video/omapfb_dss.h> @@ -210,7 +211,7 @@ static ssize_t overlay_enabled_store(struct omap_overlay *ovl, const char *buf, int r; bool enable; - r = strtobool(buf, &enable); + r = kstrtobool(buf, &enable); if (r) return r; diff --git a/drivers/video/fbdev/omap2/omapfb/omapfb-sysfs.c b/drivers/video/fbdev/omap2/omapfb/omapfb-sysfs.c index 06dc41aa0354..831b2c2fbdf9 100644 --- a/drivers/video/fbdev/omap2/omapfb/omapfb-sysfs.c +++ b/drivers/video/fbdev/omap2/omapfb/omapfb-sysfs.c @@ -15,6 +15,7 @@ #include <linux/uaccess.h> #include <linux/platform_device.h> #include <linux/kernel.h> +#include <linux/kstrtox.h> #include <linux/mm.h> #include <linux/omapfb.h> @@ -96,7 +97,7 @@ static ssize_t store_mirror(struct device *dev, int r; struct 
fb_var_screeninfo new_var; - r = strtobool(buf, &mirror); + r = kstrtobool(buf, &mirror); if (r) return r; diff --git a/drivers/video/fbdev/riva/fbdev.c b/drivers/video/fbdev/riva/fbdev.c index 644278146d3b..41edc6e79460 100644 --- a/drivers/video/fbdev/riva/fbdev.c +++ b/drivers/video/fbdev/riva/fbdev.c @@ -293,13 +293,7 @@ static int riva_bl_update_status(struct backlight_device *bd) { struct riva_par *par = bl_get_data(bd); U032 tmp_pcrt, tmp_pmc; - int level; - - if (bd->props.power != FB_BLANK_UNBLANK || - bd->props.fb_blank != FB_BLANK_UNBLANK) - level = 0; - else - level = bd->props.brightness; + int level = backlight_get_brightness(bd); tmp_pmc = NV_RD32(par->riva.PMC, 0x10F0) & 0x0000FFFF; tmp_pcrt = NV_RD32(par->riva.PCRTC0, 0x081C) & 0xFFFFFFFC; diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c index 6a2cf754912d..ff4b1d583788 100644 --- a/fs/btrfs/raid56.c +++ b/fs/btrfs/raid56.c @@ -1426,12 +1426,20 @@ static void rbio_update_error_bitmap(struct btrfs_raid_bio *rbio, struct bio *bi u32 bio_size = 0; struct bio_vec *bvec; struct bvec_iter_all iter_all; + int i; bio_for_each_segment_all(bvec, bio, iter_all) bio_size += bvec->bv_len; - bitmap_set(rbio->error_bitmap, total_sector_nr, - bio_size >> rbio->bioc->fs_info->sectorsize_bits); + /* + * Since we can have multiple bios touching the error_bitmap, we cannot + * call bitmap_set() without protection. + * + * Instead use set_bit() for each bit, as set_bit() itself is atomic. + */ + for (i = total_sector_nr; i < total_sector_nr + + (bio_size >> rbio->bioc->fs_info->sectorsize_bits); i++) + set_bit(i, rbio->error_bitmap); } /* Verify the data sectors at read time. */ @@ -1886,7 +1894,7 @@ pstripe: sector->uptodate = 1; } if (failb >= 0) { - ret = verify_one_sector(rbio, faila, sector_nr); + ret = verify_one_sector(rbio, failb, sector_nr); if (ret < 0) goto cleanup; diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c index e65e6b6600a7..d50182b6deec 100644 --- a/fs/btrfs/send.c +++ b/fs/btrfs/send.c @@ -8073,10 +8073,10 @@ long btrfs_ioctl_send(struct inode *inode, struct btrfs_ioctl_send_args *arg) /* * Check that we don't overflow at later allocations, we request * clone_sources_count + 1 items, and compare to unsigned long inside - * access_ok. + * access_ok. Also set an upper limit for allocation size so this can't + * easily exhaust memory. Max number of clone sources is about 200K. 
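 *
 * [Editorial aside] The arithmetic behind "about 200K", assuming
 * sizeof(struct clone_root) is roughly 40 bytes on 64-bit (the struct
 * definition is not part of this hunk):
 *
 *	SZ_8M / sizeof(struct clone_root) = 8388608 / 40 ~= 209715
 *
 * so the new bound caps the allocation at 8 MiB while still leaving
 * room for any plausible number of clone sources.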
*/ - if (arg->clone_sources_count > - ULONG_MAX / sizeof(struct clone_root) - 1) { + if (arg->clone_sources_count > SZ_8M / sizeof(struct clone_root)) { ret = -EINVAL; goto out; } diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c index bcfef75b97da..4cdadf30163d 100644 --- a/fs/btrfs/volumes.c +++ b/fs/btrfs/volumes.c @@ -1600,7 +1600,7 @@ again: if (ret < 0) goto out; - while (1) { + while (search_start < search_end) { l = path->nodes[0]; slot = path->slots[0]; if (slot >= btrfs_header_nritems(l)) { @@ -1623,6 +1623,9 @@ again: if (key.type != BTRFS_DEV_EXTENT_KEY) goto next; + if (key.offset > search_end) + break; + if (key.offset > search_start) { hole_size = key.offset - search_start; dev_extent_hole_check(device, &search_start, &hole_size, @@ -1683,6 +1686,7 @@ next: else ret = 0; + ASSERT(max_hole_start + max_hole_size <= search_end); out: btrfs_free_path(path); *start = max_hole_start; diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c index 01a13de11832..da7bb9187b68 100644 --- a/fs/btrfs/zlib.c +++ b/fs/btrfs/zlib.c @@ -63,7 +63,7 @@ struct list_head *zlib_alloc_workspace(unsigned int level) workspacesize = max(zlib_deflate_workspacesize(MAX_WBITS, MAX_MEM_LEVEL), zlib_inflate_workspacesize()); - workspace->strm.workspace = kvmalloc(workspacesize, GFP_KERNEL); + workspace->strm.workspace = kvzalloc(workspacesize, GFP_KERNEL); workspace->level = level; workspace->buf = NULL; /* diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c index 8c74871e37c9..cac4083e387a 100644 --- a/fs/ceph/addr.c +++ b/fs/ceph/addr.c @@ -305,7 +305,7 @@ static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq) struct inode *inode = rreq->inode; struct ceph_inode_info *ci = ceph_inode(inode); struct ceph_fs_client *fsc = ceph_inode_to_client(inode); - struct ceph_osd_request *req; + struct ceph_osd_request *req = NULL; struct ceph_vino vino = ceph_vino(inode); struct iov_iter iter; struct page **pages; @@ -313,6 +313,11 @@ static void ceph_netfs_issue_read(struct netfs_io_subrequest *subreq) int err = 0; u64 len = subreq->len; + if (ceph_inode_is_shutdown(inode)) { + err = -EIO; + goto out; + } + if (ceph_has_inline_data(ci) && ceph_netfs_issue_op_inline(subreq)) return; @@ -563,6 +568,9 @@ static int writepage_nounlock(struct page *page, struct writeback_control *wbc) dout("writepage %p idx %lu\n", page, page->index); + if (ceph_inode_is_shutdown(inode)) + return -EIO; + /* verify this is a writeable snap context */ snapc = page_snap_context(page); if (!snapc) { @@ -1643,7 +1651,7 @@ int ceph_uninline_data(struct file *file) struct ceph_inode_info *ci = ceph_inode(inode); struct ceph_fs_client *fsc = ceph_inode_to_client(inode); struct ceph_osd_request *req = NULL; - struct ceph_cap_flush *prealloc_cf; + struct ceph_cap_flush *prealloc_cf = NULL; struct folio *folio = NULL; u64 inline_version = CEPH_INLINE_NONE; struct page *pages[1]; @@ -1657,6 +1665,11 @@ int ceph_uninline_data(struct file *file) dout("uninline_data %p %llx.%llx inline_version %llu\n", inode, ceph_vinop(inode), inline_version); + if (ceph_inode_is_shutdown(inode)) { + err = -EIO; + goto out; + } + if (inline_version == CEPH_INLINE_NONE) return 0; diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c index f75ad432f375..210e40037881 100644 --- a/fs/ceph/caps.c +++ b/fs/ceph/caps.c @@ -4078,6 +4078,7 @@ void ceph_handle_caps(struct ceph_mds_session *session, void *p, *end; struct cap_extra_info extra_info = {}; bool queue_trunc; + bool close_sessions = false; dout("handle_caps from mds%d\n", session->s_mds); @@ -4215,9 +4216,13 @@ void 
ceph_handle_caps(struct ceph_mds_session *session, realm = NULL; if (snaptrace_len) { down_write(&mdsc->snap_rwsem); - ceph_update_snap_trace(mdsc, snaptrace, - snaptrace + snaptrace_len, - false, &realm); + if (ceph_update_snap_trace(mdsc, snaptrace, + snaptrace + snaptrace_len, + false, &realm)) { + up_write(&mdsc->snap_rwsem); + close_sessions = true; + goto done; + } downgrade_write(&mdsc->snap_rwsem); } else { down_read(&mdsc->snap_rwsem); @@ -4277,6 +4282,11 @@ done_unlocked: iput(inode); out: ceph_put_string(extra_info.pool_ns); + + /* Defer closing the sessions after s_mutex lock being released */ + if (close_sessions) + ceph_mdsc_close_sessions(mdsc); + return; flush_cap_releases: diff --git a/fs/ceph/file.c b/fs/ceph/file.c index 764598e1efd9..b5cff85925a1 100644 --- a/fs/ceph/file.c +++ b/fs/ceph/file.c @@ -2011,6 +2011,9 @@ static int ceph_zero_partial_object(struct inode *inode, loff_t zero = 0; int op; + if (ceph_inode_is_shutdown(inode)) + return -EIO; + if (!length) { op = offset ? CEPH_OSD_OP_DELETE : CEPH_OSD_OP_TRUNCATE; length = &zero; diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c index 26a0a8b9975e..e163f58ff3c5 100644 --- a/fs/ceph/mds_client.c +++ b/fs/ceph/mds_client.c @@ -806,6 +806,9 @@ static struct ceph_mds_session *register_session(struct ceph_mds_client *mdsc, { struct ceph_mds_session *s; + if (READ_ONCE(mdsc->fsc->mount_state) == CEPH_MOUNT_FENCE_IO) + return ERR_PTR(-EIO); + if (mds >= mdsc->mdsmap->possible_max_rank) return ERR_PTR(-EINVAL); @@ -1478,6 +1481,9 @@ static int __open_session(struct ceph_mds_client *mdsc, int mstate; int mds = session->s_mds; + if (READ_ONCE(mdsc->fsc->mount_state) == CEPH_MOUNT_FENCE_IO) + return -EIO; + /* wait for mds to go active? */ mstate = ceph_mdsmap_get_state(mdsc->mdsmap, mds); dout("open_session to mds%d (%s)\n", mds, @@ -2860,6 +2866,11 @@ static void __do_request(struct ceph_mds_client *mdsc, return; } + if (READ_ONCE(mdsc->fsc->mount_state) == CEPH_MOUNT_FENCE_IO) { + dout("do_request metadata corrupted\n"); + err = -EIO; + goto finish; + } if (req->r_timeout && time_after_eq(jiffies, req->r_started + req->r_timeout)) { dout("do_request timed out\n"); @@ -3245,6 +3256,7 @@ static void handle_reply(struct ceph_mds_session *session, struct ceph_msg *msg) u64 tid; int err, result; int mds = session->s_mds; + bool close_sessions = false; if (msg->front.iov_len < sizeof(*head)) { pr_err("mdsc_handle_reply got corrupt (short) reply\n"); @@ -3351,10 +3363,17 @@ static void handle_reply(struct ceph_mds_session *session, struct ceph_msg *msg) realm = NULL; if (rinfo->snapblob_len) { down_write(&mdsc->snap_rwsem); - ceph_update_snap_trace(mdsc, rinfo->snapblob, + err = ceph_update_snap_trace(mdsc, rinfo->snapblob, rinfo->snapblob + rinfo->snapblob_len, le32_to_cpu(head->op) == CEPH_MDS_OP_RMSNAP, &realm); + if (err) { + up_write(&mdsc->snap_rwsem); + close_sessions = true; + if (err == -EIO) + ceph_msg_dump(msg); + goto out_err; + } downgrade_write(&mdsc->snap_rwsem); } else { down_read(&mdsc->snap_rwsem); @@ -3412,6 +3431,10 @@ out_err: req->r_end_latency, err); out: ceph_mdsc_put_request(req); + + /* Defer closing the sessions after s_mutex lock being released */ + if (close_sessions) + ceph_mdsc_close_sessions(mdsc); return; } @@ -5011,7 +5034,7 @@ static bool done_closing_sessions(struct ceph_mds_client *mdsc, int skipped) } /* - * called after sb is ro. + * called after sb is ro or when metadata corrupted. 
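 *
 * [Editorial aside] The call sites converted in this series of hunks
 * share one deferred-teardown shape: a ceph_update_snap_trace()
 * failure is recorded in a local close_sessions flag while
 * snap_rwsem/s_mutex are still held, and this function runs only once
 * those locks are dropped, e.g. (condensed from the caps.c hunk above):
 *
 *	if (ceph_update_snap_trace(mdsc, snaptrace,
 *				   snaptrace + snaptrace_len,
 *				   false, &realm)) {
 *		up_write(&mdsc->snap_rwsem);
 *		close_sessions = true;
 *		goto done;
 *	}
 *	...
 *	if (close_sessions)
 *		ceph_mdsc_close_sessions(mdsc);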
*/ void ceph_mdsc_close_sessions(struct ceph_mds_client *mdsc) { @@ -5301,7 +5324,8 @@ static void mds_peer_reset(struct ceph_connection *con) struct ceph_mds_client *mdsc = s->s_mdsc; pr_warn("mds%d closed our session\n", s->s_mds); - send_mds_reconnect(mdsc, s); + if (READ_ONCE(mdsc->fsc->mount_state) != CEPH_MOUNT_FENCE_IO) + send_mds_reconnect(mdsc, s); } static void mds_dispatch(struct ceph_connection *con, struct ceph_msg *msg) diff --git a/fs/ceph/snap.c b/fs/ceph/snap.c index e4151852184e..87007203f130 100644 --- a/fs/ceph/snap.c +++ b/fs/ceph/snap.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 #include <linux/ceph/ceph_debug.h> +#include <linux/fs.h> #include <linux/sort.h> #include <linux/slab.h> #include <linux/iversion.h> @@ -766,8 +767,10 @@ int ceph_update_snap_trace(struct ceph_mds_client *mdsc, struct ceph_snap_realm *realm; struct ceph_snap_realm *first_realm = NULL; struct ceph_snap_realm *realm_to_rebuild = NULL; + struct ceph_client *client = mdsc->fsc->client; int rebuild_snapcs; int err = -ENOMEM; + int ret; LIST_HEAD(dirty_realms); lockdep_assert_held_write(&mdsc->snap_rwsem); @@ -884,6 +887,27 @@ fail: if (first_realm) ceph_put_snap_realm(mdsc, first_realm); pr_err("%s error %d\n", __func__, err); + + /* + * When receiving a corrupted snap trace we don't know what + * exactly has happened in MDS side. And we shouldn't continue + * writing to OSD, which may corrupt the snapshot contents. + * + * Just try to blocklist this kclient and then this kclient + * must be remounted to continue after the corrupted metadata + * fixed in the MDS side. + */ + WRITE_ONCE(mdsc->fsc->mount_state, CEPH_MOUNT_FENCE_IO); + ret = ceph_monc_blocklist_add(&client->monc, &client->msgr.inst.addr); + if (ret) + pr_err("%s failed to blocklist %s: %d\n", __func__, + ceph_pr_addr(&client->msgr.inst.addr), ret); + + WARN(1, "%s: %s%sdo remount to continue%s", + __func__, ret ? "" : ceph_pr_addr(&client->msgr.inst.addr), + ret ? "" : " was blocklisted, ", + err == -EIO ? " after corrupted snaptrace is fixed" : ""); + return err; } @@ -984,6 +1008,7 @@ void ceph_handle_snap(struct ceph_mds_client *mdsc, __le64 *split_inos = NULL, *split_realms = NULL; int i; int locked_rwsem = 0; + bool close_sessions = false; /* decode */ if (msg->front.iov_len < sizeof(*h)) @@ -1092,8 +1117,12 @@ skip_inode: * update using the provided snap trace. if we are deleting a * snap, we can avoid queueing cap_snaps. 
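 *
 * [Editorial aside] The call below now has its return value checked:
 * by the time it fails, the fail: path added earlier in this file has
 * already switched the mount to CEPH_MOUNT_FENCE_IO and asked the
 * monitors to blocklist this client, so flagging close_sessions and
 * jumping to the cleanup label simply finishes that fencing sequence
 * once snap_rwsem is released.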
*/ - ceph_update_snap_trace(mdsc, p, e, - op == CEPH_SNAP_OP_DESTROY, NULL); + if (ceph_update_snap_trace(mdsc, p, e, + op == CEPH_SNAP_OP_DESTROY, + NULL)) { + close_sessions = true; + goto bad; + } if (op == CEPH_SNAP_OP_SPLIT) /* we took a reference when we created the realm, above */ @@ -1112,6 +1141,9 @@ bad: out: if (locked_rwsem) up_write(&mdsc->snap_rwsem); + + if (close_sessions) + ceph_mdsc_close_sessions(mdsc); return; } diff --git a/fs/ceph/super.h b/fs/ceph/super.h index 0ed3be75bb9a..07c6906cda70 100644 --- a/fs/ceph/super.h +++ b/fs/ceph/super.h @@ -100,6 +100,17 @@ struct ceph_mount_options { char *mon_addr; }; +/* mount state */ +enum { + CEPH_MOUNT_MOUNTING, + CEPH_MOUNT_MOUNTED, + CEPH_MOUNT_UNMOUNTING, + CEPH_MOUNT_UNMOUNTED, + CEPH_MOUNT_SHUTDOWN, + CEPH_MOUNT_RECOVER, + CEPH_MOUNT_FENCE_IO, +}; + #define CEPH_ASYNC_CREATE_CONFLICT_BITS 8 struct ceph_fs_client { diff --git a/fs/cifs/file.c b/fs/cifs/file.c index 22dfc1f8b4f1..b8d1cbadb689 100644 --- a/fs/cifs/file.c +++ b/fs/cifs/file.c @@ -3889,7 +3889,7 @@ uncached_fill_pages(struct TCP_Server_Info *server, rdata->got_bytes += result; } - return rdata->got_bytes > 0 && result != -ECONNABORTED ? + return result != -ECONNABORTED && rdata->got_bytes > 0 ? rdata->got_bytes : result; } @@ -4665,7 +4665,7 @@ readpages_fill_pages(struct TCP_Server_Info *server, rdata->got_bytes += result; } - return rdata->got_bytes > 0 && result != -ECONNABORTED ? + return result != -ECONNABORTED && rdata->got_bytes > 0 ? rdata->got_bytes : result; } diff --git a/fs/coredump.c b/fs/coredump.c index de78bde2991b..a25ecec9ca7c 100644 --- a/fs/coredump.c +++ b/fs/coredump.c @@ -838,6 +838,30 @@ static int __dump_skip(struct coredump_params *cprm, size_t nr) } } +int dump_emit(struct coredump_params *cprm, const void *addr, int nr) +{ + if (cprm->to_skip) { + if (!__dump_skip(cprm, cprm->to_skip)) + return 0; + cprm->to_skip = 0; + } + return __dump_emit(cprm, addr, nr); +} +EXPORT_SYMBOL(dump_emit); + +void dump_skip_to(struct coredump_params *cprm, unsigned long pos) +{ + cprm->to_skip = pos - cprm->pos; +} +EXPORT_SYMBOL(dump_skip_to); + +void dump_skip(struct coredump_params *cprm, size_t nr) +{ + cprm->to_skip += nr; +} +EXPORT_SYMBOL(dump_skip); + +#ifdef CONFIG_ELF_CORE static int dump_emit_page(struct coredump_params *cprm, struct page *page) { struct bio_vec bvec = { @@ -871,30 +895,6 @@ static int dump_emit_page(struct coredump_params *cprm, struct page *page) return 1; } -int dump_emit(struct coredump_params *cprm, const void *addr, int nr) -{ - if (cprm->to_skip) { - if (!__dump_skip(cprm, cprm->to_skip)) - return 0; - cprm->to_skip = 0; - } - return __dump_emit(cprm, addr, nr); -} -EXPORT_SYMBOL(dump_emit); - -void dump_skip_to(struct coredump_params *cprm, unsigned long pos) -{ - cprm->to_skip = pos - cprm->pos; -} -EXPORT_SYMBOL(dump_skip_to); - -void dump_skip(struct coredump_params *cprm, size_t nr) -{ - cprm->to_skip += nr; -} -EXPORT_SYMBOL(dump_skip); - -#ifdef CONFIG_ELF_CORE int dump_user_range(struct coredump_params *cprm, unsigned long start, unsigned long len) { diff --git a/fs/freevxfs/Kconfig b/fs/freevxfs/Kconfig index c05c71d57291..0e2fc08f7de4 100644 --- a/fs/freevxfs/Kconfig +++ b/fs/freevxfs/Kconfig @@ -8,7 +8,7 @@ config VXFS_FS of SCO UnixWare (and possibly others) and optionally available for Sunsoft Solaris, HP-UX and many other operating systems. However these particular OS implementations of vxfs may differ in on-disk - data endianess and/or superblock offset. 
The vxfs module has been + data endianness and/or superblock offset. The vxfs module has been tested with SCO UnixWare and HP-UX B.10.20 (pa-risc 1.1 arch.) Currently only readonly access is supported and VxFX versions 2, 3 and 4. Tests were performed with HP-UX VxFS version 3. diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c index e35a0398db63..af1c49ae11b1 100644 --- a/fs/proc/task_mmu.c +++ b/fs/proc/task_mmu.c @@ -745,9 +745,7 @@ static int smaps_hugetlb_range(pte_t *pte, unsigned long hmask, page = pfn_swap_entry_to_page(swpent); } if (page) { - int mapcount = page_mapcount(page); - - if (mapcount >= 2) + if (page_mapcount(page) >= 2 || hugetlb_pmd_shared(pte)) mss->shared_hugetlb += huge_page_size(hstate_vma(vma)); else mss->private_hugetlb += huge_page_size(hstate_vma(vma)); diff --git a/fs/squashfs/squashfs_fs.h b/fs/squashfs/squashfs_fs.h index b3fdc8212c5f..95f8e8901768 100644 --- a/fs/squashfs/squashfs_fs.h +++ b/fs/squashfs/squashfs_fs.h @@ -183,7 +183,7 @@ static inline int squashfs_block_size(__le32 raw) #define SQUASHFS_ID_BLOCK_BYTES(A) (SQUASHFS_ID_BLOCKS(A) *\ sizeof(u64)) /* xattr id lookup table defines */ -#define SQUASHFS_XATTR_BYTES(A) ((A) * sizeof(struct squashfs_xattr_id)) +#define SQUASHFS_XATTR_BYTES(A) (((u64) (A)) * sizeof(struct squashfs_xattr_id)) #define SQUASHFS_XATTR_BLOCK(A) (SQUASHFS_XATTR_BYTES(A) / \ SQUASHFS_METADATA_SIZE) diff --git a/fs/squashfs/squashfs_fs_sb.h b/fs/squashfs/squashfs_fs_sb.h index 659082e9e51d..72f6f4b37863 100644 --- a/fs/squashfs/squashfs_fs_sb.h +++ b/fs/squashfs/squashfs_fs_sb.h @@ -63,7 +63,7 @@ struct squashfs_sb_info { long long bytes_used; unsigned int inodes; unsigned int fragments; - int xattr_ids; + unsigned int xattr_ids; unsigned int ids; bool panic_on_errors; const struct squashfs_decompressor_thread_ops *thread_ops; diff --git a/fs/squashfs/xattr.h b/fs/squashfs/xattr.h index d8a270d3ac4c..f1a463d8bfa0 100644 --- a/fs/squashfs/xattr.h +++ b/fs/squashfs/xattr.h @@ -10,12 +10,12 @@ #ifdef CONFIG_SQUASHFS_XATTR extern __le64 *squashfs_read_xattr_id_table(struct super_block *, u64, - u64 *, int *); + u64 *, unsigned int *); extern int squashfs_xattr_lookup(struct super_block *, unsigned int, int *, unsigned int *, unsigned long long *); #else static inline __le64 *squashfs_read_xattr_id_table(struct super_block *sb, - u64 start, u64 *xattr_table_start, int *xattr_ids) + u64 start, u64 *xattr_table_start, unsigned int *xattr_ids) { struct squashfs_xattr_id_table *id_table; diff --git a/fs/squashfs/xattr_id.c b/fs/squashfs/xattr_id.c index 087cab8c78f4..b88d19e9581e 100644 --- a/fs/squashfs/xattr_id.c +++ b/fs/squashfs/xattr_id.c @@ -56,7 +56,7 @@ int squashfs_xattr_lookup(struct super_block *sb, unsigned int index, * Read uncompressed xattr id lookup table indexes from disk into memory */ __le64 *squashfs_read_xattr_id_table(struct super_block *sb, u64 table_start, - u64 *xattr_table_start, int *xattr_ids) + u64 *xattr_table_start, unsigned int *xattr_ids) { struct squashfs_sb_info *msblk = sb->s_fs_info; unsigned int len, indexes; @@ -76,7 +76,7 @@ __le64 *squashfs_read_xattr_id_table(struct super_block *sb, u64 table_start, /* Sanity check values */ /* there is always at least one xattr id */ - if (*xattr_ids == 0) + if (*xattr_ids <= 0) return ERR_PTR(-EINVAL); len = SQUASHFS_XATTR_BLOCK_BYTES(*xattr_ids); diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h index 9270cd87da3f..6470f67e63c4 100644 --- a/include/kvm/arm_vgic.h +++ b/include/kvm/arm_vgic.h @@ -263,7 +263,7 @@ struct vgic_dist { struct 
vgic_io_device dist_iodev; bool has_its; - bool save_its_tables_in_progress; + bool table_write_in_progress; /* * Contains the attributes and gpa of the LPI configuration table. diff --git a/include/linux/bpf.h b/include/linux/bpf.h index f7f24defccb8..35c18a98c21a 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -899,8 +899,12 @@ enum bpf_cgroup_storage_type { /* The argument is a structure. */ #define BTF_FMODEL_STRUCT_ARG BIT(0) +/* The argument is signed. */ +#define BTF_FMODEL_SIGNED_ARG BIT(1) + struct btf_func_model { u8 ret_size; + u8 ret_flags; u8 nr_args; u8 arg_size[MAX_BPF_FUNC_ARGS]; u8 arg_flags[MAX_BPF_FUNC_ARGS]; @@ -939,7 +943,13 @@ struct btf_func_model { /* Each call __bpf_prog_enter + call bpf_func + call __bpf_prog_exit is ~50 * bytes on x86. */ -#define BPF_MAX_TRAMP_LINKS 38 +enum { +#if defined(__s390x__) + BPF_MAX_TRAMP_LINKS = 27, +#else + BPF_MAX_TRAMP_LINKS = 38, +#endif +}; struct bpf_tramp_links { struct bpf_tramp_link *links[BPF_MAX_TRAMP_LINKS]; @@ -1836,7 +1846,7 @@ struct bpf_prog * __must_check bpf_prog_inc_not_zero(struct bpf_prog *prog); void bpf_prog_put(struct bpf_prog *prog); void bpf_prog_free_id(struct bpf_prog *prog); -void bpf_map_free_id(struct bpf_map *map, bool do_idr_lock); +void bpf_map_free_id(struct bpf_map *map); struct btf_field *btf_record_find(const struct btf_record *rec, u32 offset, enum btf_field_type type); diff --git a/include/linux/btf.h b/include/linux/btf.h index 5f628f323442..49e0fe6d8274 100644 --- a/include/linux/btf.h +++ b/include/linux/btf.h @@ -73,6 +73,14 @@ #define KF_RCU (1 << 7) /* kfunc only takes rcu pointer arguments */ /* + * Tag marking a kernel function as a kfunc. This is meant to minimize the + * amount of copy-paste that kfunc authors have to include for correctness so + * as to avoid issues such as the compiler inlining or eliding either a static + * kfunc, or a global kfunc in an LTO build. + */ +#define __bpf_kfunc __used noinline + +/* * Return the name of the passed struct, if exists, or halt the build if for * example the structure gets renamed. In this way, developers have to revisit * the code using that structure name, and update it accordingly. 
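To make the new __bpf_kfunc tag concrete, here is a hedged sketch of how a kfunc definition carries it (bpf_example_sum is a hypothetical function invented for illustration, not part of this diff); the tag simply expands to __used noinline, so the symbol cannot be inlined or elided, exactly as the comment above describes:

	__bpf_kfunc u32 bpf_example_sum(u32 a, u32 b)
	{
		return a + b;
	}

The kernel/bpf/cpumask.c and kernel/bpf/helpers.c hunks later in this section apply this same tag to every existing kfunc definition.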
@@ -236,6 +244,16 @@ static inline bool btf_type_is_small_int(const struct btf_type *t) return btf_type_is_int(t) && t->size <= sizeof(u64); } +static inline u8 btf_int_encoding(const struct btf_type *t) +{ + return BTF_INT_ENCODING(*(u32 *)(t + 1)); +} + +static inline bool btf_type_is_signed_int(const struct btf_type *t) +{ + return btf_type_is_int(t) && (btf_int_encoding(t) & BTF_INT_SIGNED); +} + static inline bool btf_type_is_enum(const struct btf_type *t) { return BTF_INFO_KIND(t->info) == BTF_KIND_ENUM; @@ -306,11 +324,6 @@ static inline u8 btf_int_offset(const struct btf_type *t) return BTF_INT_OFFSET(*(u32 *)(t + 1)); } -static inline u8 btf_int_encoding(const struct btf_type *t) -{ - return BTF_INT_ENCODING(*(u32 *)(t + 1)); -} - static inline bool btf_type_is_scalar(const struct btf_type *t) { return btf_type_is_int(t) || btf_type_is_enum(t); diff --git a/include/linux/ceph/libceph.h b/include/linux/ceph/libceph.h index 00af2c98da75..4497d0a6772c 100644 --- a/include/linux/ceph/libceph.h +++ b/include/linux/ceph/libceph.h @@ -99,16 +99,6 @@ struct ceph_options { #define CEPH_AUTH_NAME_DEFAULT "guest" -/* mount state */ -enum { - CEPH_MOUNT_MOUNTING, - CEPH_MOUNT_MOUNTED, - CEPH_MOUNT_UNMOUNTING, - CEPH_MOUNT_UNMOUNTED, - CEPH_MOUNT_SHUTDOWN, - CEPH_MOUNT_RECOVER, -}; - static inline unsigned long ceph_timeout_jiffies(unsigned long timeout) { return timeout ?: MAX_SCHEDULE_TIMEOUT; diff --git a/include/linux/efi.h b/include/linux/efi.h index 4b27519143f5..98598bd1d2fa 100644 --- a/include/linux/efi.h +++ b/include/linux/efi.h @@ -668,7 +668,8 @@ extern struct efi { #define EFI_RT_SUPPORTED_ALL 0x3fff -#define EFI_RT_SUPPORTED_TIME_SERVICES 0x000f +#define EFI_RT_SUPPORTED_TIME_SERVICES 0x0003 +#define EFI_RT_SUPPORTED_WAKEUP_SERVICES 0x000c #define EFI_RT_SUPPORTED_VARIABLE_SERVICES 0x0070 extern struct mm_struct efi_mm; diff --git a/include/linux/highmem-internal.h b/include/linux/highmem-internal.h index 034b1106d022..e098f38422af 100644 --- a/include/linux/highmem-internal.h +++ b/include/linux/highmem-internal.h @@ -200,7 +200,7 @@ static inline void *kmap_local_pfn(unsigned long pfn) static inline void __kunmap_local(const void *addr) { #ifdef ARCH_HAS_FLUSH_ON_KUNMAP - kunmap_flush_on_unmap(addr); + kunmap_flush_on_unmap(PTR_ALIGN_DOWN(addr, PAGE_SIZE)); #endif } @@ -227,7 +227,7 @@ static inline void *kmap_atomic_pfn(unsigned long pfn) static inline void __kunmap_atomic(const void *addr) { #ifdef ARCH_HAS_FLUSH_ON_KUNMAP - kunmap_flush_on_unmap(addr); + kunmap_flush_on_unmap(PTR_ALIGN_DOWN(addr, PAGE_SIZE)); #endif pagefault_enable(); if (IS_ENABLED(CONFIG_PREEMPT_RT)) diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index 551834cd5299..db194e2ba69f 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -7,6 +7,7 @@ #include <linux/fs.h> #include <linux/hugetlb_inline.h> #include <linux/cgroup.h> +#include <linux/page_ref.h> #include <linux/list.h> #include <linux/kref.h> #include <linux/pgtable.h> @@ -1187,6 +1188,18 @@ static inline __init void hugetlb_cma_reserve(int order) } #endif +#ifdef CONFIG_ARCH_WANT_HUGE_PMD_SHARE +static inline bool hugetlb_pmd_shared(pte_t *pte) +{ + return page_count(virt_to_page(pte)) > 1; +} +#else +static inline bool hugetlb_pmd_shared(pte_t *pte) +{ + return false; +} +#endif + bool want_pmd_share(struct vm_area_struct *vma, unsigned long addr); #ifndef __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index d3c8203cab6c..85dc9b88ea37 100644 --- 
a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -1666,10 +1666,13 @@ void mem_cgroup_track_foreign_dirty_slowpath(struct folio *folio, static inline void mem_cgroup_track_foreign_dirty(struct folio *folio, struct bdi_writeback *wb) { + struct mem_cgroup *memcg; + if (mem_cgroup_disabled()) return; - if (unlikely(&folio_memcg(folio)->css != wb->memcg_css)) + memcg = folio_memcg(folio); + if (unlikely(memcg && &memcg->css != wb->memcg_css)) mem_cgroup_track_foreign_dirty_slowpath(folio, wb); } diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h index e0b104ea49b1..ecd3b5448fe9 100644 --- a/include/linux/mlx5/driver.h +++ b/include/linux/mlx5/driver.h @@ -572,6 +572,14 @@ struct mlx5_debugfs_entries { struct dentry *lag_debugfs; }; +enum mlx5_func_type { + MLX5_PF, + MLX5_VF, + MLX5_SF, + MLX5_HOST_PF, + MLX5_FUNC_TYPE_NUM, +}; + struct mlx5_ft_pool; struct mlx5_priv { /* IRQ table valid only for real pci devices PF or VF */ @@ -582,11 +590,10 @@ struct mlx5_priv { struct mlx5_nb pg_nb; struct workqueue_struct *pg_wq; struct xarray page_root_xa; - u32 fw_pages; atomic_t reg_pages; struct list_head free_list; - u32 vfs_pages; - u32 host_pf_pages; + u32 fw_pages; + u32 page_counters[MLX5_FUNC_TYPE_NUM]; u32 fw_pages_alloc_failed; u32 give_pages_dropped; u32 reclaim_pages_discard; diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index d5ef4c1fedd2..d9cdbc047b49 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -47,6 +47,7 @@ #include <uapi/linux/netdevice.h> #include <uapi/linux/if_bonding.h> #include <uapi/linux/pkt_cls.h> +#include <uapi/linux/netdev.h> #include <linux/hashtable.h> #include <linux/rbtree.h> #include <net/net_trackers.h> @@ -223,6 +224,7 @@ struct net_device_core_stats { #include <linux/static_key.h> extern struct static_key_false rps_needed; extern struct static_key_false rfs_needed; +extern struct cpumask rps_default_mask; #endif struct neighbour; @@ -1818,6 +1820,7 @@ enum netdev_ml_priv_type { * of Layer 2 headers. * * @flags: Interface flags (a la BSD) + * @xdp_features: XDP capability supported by the device * @priv_flags: Like 'flags' but invisible to userspace, * see if.h for the definitions * @gflags: Global flags ( kept as legacy ) @@ -2059,6 +2062,7 @@ struct net_device { /* Read-mostly cache-line for fast-path access */ unsigned int flags; + xdp_features_t xdp_features; unsigned long long priv_flags; const struct net_device_ops *netdev_ops; const struct xdp_metadata_ops *xdp_metadata_ops; @@ -2845,6 +2849,7 @@ enum netdev_cmd { NETDEV_OFFLOAD_XSTATS_DISABLE, NETDEV_OFFLOAD_XSTATS_REPORT_USED, NETDEV_OFFLOAD_XSTATS_REPORT_DELTA, + NETDEV_XDP_FEAT_CHANGE, }; const char *netdev_cmd_to_name(enum netdev_cmd cmd); diff --git a/include/linux/nvmem-provider.h b/include/linux/nvmem-provider.h index 50caa117cb62..bb15c9234e21 100644 --- a/include/linux/nvmem-provider.h +++ b/include/linux/nvmem-provider.h @@ -70,7 +70,6 @@ struct nvmem_keepout { * @word_size: Minimum read/write access granularity. * @stride: Minimum read/write access stride. * @priv: User context passed to read/write callbacks. - * @wp-gpio: Write protect pin * @ignore_wp: Write Protect pin is managed by the provider. 
* * Note: A default "nvmem<id>" name will be assigned to the device if @@ -85,7 +84,6 @@ struct nvmem_config { const char *name; int id; struct module *owner; - struct gpio_desc *wp_gpio; const struct nvmem_cell_info *cells; int ncells; const struct nvmem_keepout *keepout; diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index c3df3b55da97..47ab28a37f2f 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -1243,7 +1243,7 @@ static inline void consume_skb(struct sk_buff *skb) void __consume_stateless_skb(struct sk_buff *skb); void __kfree_skb(struct sk_buff *skb); -extern struct kmem_cache *skbuff_head_cache; +extern struct kmem_cache *skbuff_cache; void kfree_skb_partial(struct sk_buff *skb, bool head_stolen); bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from, diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h index 1341f7d62da4..be48f1cb1878 100644 --- a/include/linux/spinlock.h +++ b/include/linux/spinlock.h @@ -476,6 +476,15 @@ extern int _atomic_dec_and_lock_irqsave(atomic_t *atomic, spinlock_t *lock, #define atomic_dec_and_lock_irqsave(atomic, lock, flags) \ __cond_lock(lock, _atomic_dec_and_lock_irqsave(atomic, lock, &(flags))) +extern int _atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock); +#define atomic_dec_and_raw_lock(atomic, lock) \ + __cond_lock(lock, _atomic_dec_and_raw_lock(atomic, lock)) + +extern int _atomic_dec_and_raw_lock_irqsave(atomic_t *atomic, raw_spinlock_t *lock, + unsigned long *flags); +#define atomic_dec_and_raw_lock_irqsave(atomic, lock, flags) \ + __cond_lock(lock, _atomic_dec_and_raw_lock_irqsave(atomic, lock, &(flags))) + int __alloc_bucket_spinlocks(spinlock_t **locks, unsigned int *lock_mask, size_t max_size, unsigned int cpu_mult, gfp_t gfp, const char *name, diff --git a/include/linux/string_helpers.h b/include/linux/string_helpers.h index 8530c7328269..fae6beaaa217 100644 --- a/include/linux/string_helpers.h +++ b/include/linux/string_helpers.h @@ -11,6 +11,11 @@ struct device; struct file; struct task_struct; +static inline bool string_is_terminated(const char *s, int len) +{ + return memchr(s, '\0', len) ? 
true : false; +} + /* Descriptions of the types of units to * print in */ enum string_size_units { diff --git a/include/linux/swap.h b/include/linux/swap.h index 2787b84eaf12..0ceed49516ad 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -418,8 +418,7 @@ extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order, extern unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg, unsigned long nr_pages, gfp_t gfp_mask, - unsigned int reclaim_options, - nodemask_t *nodemask); + unsigned int reclaim_options); extern unsigned long mem_cgroup_shrink_node(struct mem_cgroup *mem, gfp_t gfp_mask, bool noswap, pg_data_t *pgdat, diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h index 8d773b042c85..400f8a7d0c3f 100644 --- a/include/net/bluetooth/hci.h +++ b/include/net/bluetooth/hci.h @@ -2156,7 +2156,7 @@ struct hci_cp_le_big_create_sync { __u8 mse; __le16 timeout; __u8 num_bis; - __u8 bis[0]; + __u8 bis[]; } __packed; #define HCI_OP_LE_BIG_TERM_SYNC 0x206c @@ -2174,7 +2174,7 @@ struct hci_cp_le_setup_iso_path { __le16 codec_vid; __u8 delay[3]; __u8 codec_cfg_len; - __u8 codec_cfg[0]; + __u8 codec_cfg[]; } __packed; struct hci_rp_le_setup_iso_path { diff --git a/include/net/bluetooth/mgmt.h b/include/net/bluetooth/mgmt.h index 743f6f59dff8..e18a927669c0 100644 --- a/include/net/bluetooth/mgmt.h +++ b/include/net/bluetooth/mgmt.h @@ -109,6 +109,8 @@ struct mgmt_rp_read_index_list { #define MGMT_SETTING_STATIC_ADDRESS 0x00008000 #define MGMT_SETTING_PHY_CONFIGURATION 0x00010000 #define MGMT_SETTING_WIDEBAND_SPEECH 0x00020000 +#define MGMT_SETTING_CIS_CENTRAL 0x00040000 +#define MGMT_SETTING_CIS_PERIPHERAL 0x00080000 #define MGMT_OP_READ_INFO 0x0004 #define MGMT_READ_INFO_SIZE 0 diff --git a/include/net/netfilter/nf_conntrack.h b/include/net/netfilter/nf_conntrack.h index 6a2019aaa464..7bbab8f2b73d 100644 --- a/include/net/netfilter/nf_conntrack.h +++ b/include/net/netfilter/nf_conntrack.h @@ -362,6 +362,10 @@ static inline struct nf_conntrack_net *nf_ct_pernet(const struct net *net) return net_generic(net, nf_conntrack_net_id); } +int nf_ct_skb_network_trim(struct sk_buff *skb, int family); +int nf_ct_handle_fragments(struct net *net, struct sk_buff *skb, + u16 zone, u8 family, u8 *proto, u16 *mru); + #define NF_CT_STAT_INC(net, count) __this_cpu_inc((net)->ct.stat->count) #define NF_CT_STAT_INC_ATOMIC(net, count) this_cpu_inc((net)->ct.stat->count) #define NF_CT_STAT_ADD_ATOMIC(net, count, v) this_cpu_add((net)->ct.stat->count, (v)) diff --git a/include/net/xdp.h b/include/net/xdp.h index 91292aa13bc0..d517bfac937b 100644 --- a/include/net/xdp.h +++ b/include/net/xdp.h @@ -7,6 +7,7 @@ #define __LINUX_NET_XDP_H__ #include <linux/skbuff.h> /* skb_shared_info */ +#include <uapi/linux/netdev.h> /** * DOC: XDP RX-queue information @@ -43,6 +44,8 @@ enum xdp_mem_type { MEM_TYPE_MAX, }; +typedef u32 xdp_features_t; + /* XDP flags for ndo_xdp_xmit */ #define XDP_XMIT_FLUSH (1U << 0) /* doorbell signal consumer */ #define XDP_XMIT_FLAGS_MASK XDP_XMIT_FLUSH @@ -425,9 +428,21 @@ MAX_XDP_METADATA_KFUNC, #ifdef CONFIG_NET u32 bpf_xdp_metadata_kfunc_id(int id); bool bpf_dev_bound_kfunc_id(u32 btf_id); +void xdp_features_set_redirect_target(struct net_device *dev, bool support_sg); +void xdp_features_clear_redirect_target(struct net_device *dev); #else static inline u32 bpf_xdp_metadata_kfunc_id(int id) { return 0; } static inline bool bpf_dev_bound_kfunc_id(u32 btf_id) { return false; } + +static inline void +xdp_features_set_redirect_target(struct 
net_device *dev, bool support_sg) +{ +} + +static inline void +xdp_features_clear_redirect_target(struct net_device *dev) +{ +} #endif #endif /* __LINUX_NET_XDP_H__ */ diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h index d7bb4acf4580..c3c0b0aa8381 100644 --- a/include/trace/events/rxrpc.h +++ b/include/trace/events/rxrpc.h @@ -1118,9 +1118,9 @@ TRACE_EVENT(rxrpc_tx_data, TRACE_EVENT(rxrpc_tx_ack, TP_PROTO(unsigned int call, rxrpc_serial_t serial, rxrpc_seq_t ack_first, rxrpc_serial_t ack_serial, - u8 reason, u8 n_acks), + u8 reason, u8 n_acks, u16 rwind), - TP_ARGS(call, serial, ack_first, ack_serial, reason, n_acks), + TP_ARGS(call, serial, ack_first, ack_serial, reason, n_acks, rwind), TP_STRUCT__entry( __field(unsigned int, call) @@ -1129,6 +1129,7 @@ TRACE_EVENT(rxrpc_tx_ack, __field(rxrpc_serial_t, ack_serial) __field(u8, reason) __field(u8, n_acks) + __field(u16, rwind) ), TP_fast_assign( @@ -1138,15 +1139,17 @@ TRACE_EVENT(rxrpc_tx_ack, __entry->ack_serial = ack_serial; __entry->reason = reason; __entry->n_acks = n_acks; + __entry->rwind = rwind; ), - TP_printk(" c=%08x ACK %08x %s f=%08x r=%08x n=%u", + TP_printk(" c=%08x ACK %08x %s f=%08x r=%08x n=%u rw=%u", __entry->call, __entry->serial, __print_symbolic(__entry->reason, rxrpc_ack_names), __entry->ack_first, __entry->ack_serial, - __entry->n_acks) + __entry->n_acks, + __entry->rwind) ); TRACE_EVENT(rxrpc_receive, diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index ba0f0cfb5e42..17afd2b35ee5 100644 --- a/include/uapi/linux/bpf.h +++ b/include/uapi/linux/bpf.h @@ -2801,7 +2801,7 @@ union bpf_attr { * * long bpf_perf_prog_read_value(struct bpf_perf_event_data *ctx, struct bpf_perf_event_value *buf, u32 buf_size) * Description - * For en eBPF program attached to a perf event, retrieve the + * For an eBPF program attached to a perf event, retrieve the * value of the event counter associated to *ctx* and store it in * the structure pointed by *buf* and of size *buf_size*. Enabled * and running times are also stored in the structure (see diff --git a/include/uapi/linux/ip.h b/include/uapi/linux/ip.h index 874a92349bf5..283dec7e3645 100644 --- a/include/uapi/linux/ip.h +++ b/include/uapi/linux/ip.h @@ -18,6 +18,7 @@ #ifndef _UAPI_LINUX_IP_H #define _UAPI_LINUX_IP_H #include <linux/types.h> +#include <linux/stddef.h> #include <asm/byteorder.h> #define IPTOS_TOS_MASK 0x1E diff --git a/include/uapi/linux/ipv6.h b/include/uapi/linux/ipv6.h index 81f4243bebb1..53326dfc59ec 100644 --- a/include/uapi/linux/ipv6.h +++ b/include/uapi/linux/ipv6.h @@ -4,6 +4,7 @@ #include <linux/libc-compat.h> #include <linux/types.h> +#include <linux/stddef.h> #include <linux/in6.h> #include <asm/byteorder.h> diff --git a/include/uapi/linux/netdev.h b/include/uapi/linux/netdev.h new file mode 100644 index 000000000000..9ee459872600 --- /dev/null +++ b/include/uapi/linux/netdev.h @@ -0,0 +1,59 @@ +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ +/* Do not edit directly, auto-generated from: */ +/* Documentation/netlink/specs/netdev.yaml */ +/* YNL-GEN uapi header */ + +#ifndef _UAPI_LINUX_NETDEV_H +#define _UAPI_LINUX_NETDEV_H + +#define NETDEV_FAMILY_NAME "netdev" +#define NETDEV_FAMILY_VERSION 1 + +/** + * enum netdev_xdp_act + * @NETDEV_XDP_ACT_BASIC: XDP features set supported by all drivers + * (XDP_ABORTED, XDP_DROP, XDP_PASS, XDP_TX) + * @NETDEV_XDP_ACT_REDIRECT: The netdev supports XDP_REDIRECT + * @NETDEV_XDP_ACT_NDO_XMIT: This feature informs if netdev implements + * ndo_xdp_xmit callback.
+ * @NETDEV_XDP_ACT_XSK_ZEROCOPY: This feature informs if netdev supports AF_XDP + * in zero copy mode. + * @NETDEV_XDP_ACT_HW_OFFLOAD: This feature informs if netdev supports XDP hw + * offloading. + * @NETDEV_XDP_ACT_RX_SG: This feature informs if netdev implements non-linear + * XDP buffer support in the driver napi callback. + * @NETDEV_XDP_ACT_NDO_XMIT_SG: This feature informs if netdev implements + * non-linear XDP buffer support in ndo_xdp_xmit callback. + */ +enum netdev_xdp_act { + NETDEV_XDP_ACT_BASIC = 1, + NETDEV_XDP_ACT_REDIRECT = 2, + NETDEV_XDP_ACT_NDO_XMIT = 4, + NETDEV_XDP_ACT_XSK_ZEROCOPY = 8, + NETDEV_XDP_ACT_HW_OFFLOAD = 16, + NETDEV_XDP_ACT_RX_SG = 32, + NETDEV_XDP_ACT_NDO_XMIT_SG = 64, +}; + +enum { + NETDEV_A_DEV_IFINDEX = 1, + NETDEV_A_DEV_PAD, + NETDEV_A_DEV_XDP_FEATURES, + + __NETDEV_A_DEV_MAX, + NETDEV_A_DEV_MAX = (__NETDEV_A_DEV_MAX - 1) +}; + +enum { + NETDEV_CMD_DEV_GET = 1, + NETDEV_CMD_DEV_ADD_NTF, + NETDEV_CMD_DEV_DEL_NTF, + NETDEV_CMD_DEV_CHANGE_NTF, + + __NETDEV_CMD_MAX, + NETDEV_CMD_MAX = (__NETDEV_CMD_MAX - 1) +}; + +#define NETDEV_MCGRP_MGMT "mgmt" + +#endif /* _UAPI_LINUX_NETDEV_H */ diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c index 5b3cb8068b4d..740bdb045b14 100644 --- a/kernel/bpf/btf.c +++ b/kernel/bpf/btf.c @@ -6453,6 +6453,18 @@ static int __get_type_size(struct btf *btf, u32 btf_id, return -EINVAL; } +static u8 __get_type_fmodel_flags(const struct btf_type *t) +{ + u8 flags = 0; + + if (__btf_type_is_struct(t)) + flags |= BTF_FMODEL_STRUCT_ARG; + if (btf_type_is_signed_int(t)) + flags |= BTF_FMODEL_SIGNED_ARG; + + return flags; +} + int btf_distill_func_proto(struct bpf_verifier_log *log, struct btf *btf, const struct btf_type *func, @@ -6473,6 +6485,7 @@ int btf_distill_func_proto(struct bpf_verifier_log *log, m->arg_flags[i] = 0; } m->ret_size = 8; + m->ret_flags = 0; m->nr_args = MAX_BPF_FUNC_REG_ARGS; return 0; } @@ -6492,6 +6505,7 @@ int btf_distill_func_proto(struct bpf_verifier_log *log, return -EINVAL; } m->ret_size = ret; + m->ret_flags = __get_type_fmodel_flags(t); for (i = 0; i < nargs; i++) { if (i == nargs - 1 && args[i].type == 0) { @@ -6516,7 +6530,7 @@ int btf_distill_func_proto(struct bpf_verifier_log *log, return -EINVAL; } m->arg_size[i] = ret; - m->arg_flags[i] = __btf_type_is_struct(t) ? BTF_FMODEL_STRUCT_ARG : 0; + m->arg_flags[i] = __get_type_fmodel_flags(t); } m->nr_args = nargs; return 0; diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c index e0b2d016f0bf..d2110c1f6fa6 100644 --- a/kernel/bpf/cpumap.c +++ b/kernel/bpf/cpumap.c @@ -361,7 +361,7 @@ static int cpu_map_kthread_run(void *data) /* Support running another XDP prog on this CPU */ nframes = cpu_map_bpf_prog_run(rcpu, frames, xdp_n, &stats, &list); if (nframes) { - m = kmem_cache_alloc_bulk(skbuff_head_cache, gfp, nframes, skbs); + m = kmem_cache_alloc_bulk(skbuff_cache, gfp, nframes, skbs); if (unlikely(m == 0)) { for (i = 0; i < nframes; i++) skbs[i] = NULL; /* effect: xdp_return_frame */ diff --git a/kernel/bpf/cpumask.c b/kernel/bpf/cpumask.c index 25355a0a367a..52b981512a35 100644 --- a/kernel/bpf/cpumask.c +++ b/kernel/bpf/cpumask.c @@ -48,10 +48,13 @@ __diag_ignore_all("-Wmissing-prototypes", * bpf_cpumask_create() allocates memory using the BPF memory allocator, and * will not block. It may return NULL if no memory is available.
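 *
 * [Editorial aside] A hedged usage sketch from the BPF program side,
 * combining only kfuncs that appear in this diff (the control flow is
 * illustrative, and the cast mirrors the "may be safely passed" notes
 * in the kernel-doc below):
 *
 *	struct bpf_cpumask *mask = bpf_cpumask_create();
 *
 *	if (!mask)
 *		return 0;
 *	bpf_cpumask_set_cpu(1, mask);
 *	if (bpf_cpumask_test_cpu(1, (const struct cpumask *)mask))
 *		bpf_cpumask_clear_cpu(1, mask);
 *	bpf_cpumask_release(mask);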
*/ -struct bpf_cpumask *bpf_cpumask_create(void) +__bpf_kfunc struct bpf_cpumask *bpf_cpumask_create(void) { struct bpf_cpumask *cpumask; + /* cpumask must be the first element so struct bpf_cpumask be cast to struct cpumask. */ + BUILD_BUG_ON(offsetof(struct bpf_cpumask, cpumask) != 0); + cpumask = bpf_mem_alloc(&bpf_cpumask_ma, sizeof(*cpumask)); if (!cpumask) return NULL; @@ -71,7 +74,7 @@ struct bpf_cpumask *bpf_cpumask_create(void) * must either be embedded in a map as a kptr, or freed with * bpf_cpumask_release(). */ -struct bpf_cpumask *bpf_cpumask_acquire(struct bpf_cpumask *cpumask) +__bpf_kfunc struct bpf_cpumask *bpf_cpumask_acquire(struct bpf_cpumask *cpumask) { refcount_inc(&cpumask->usage); return cpumask; @@ -87,7 +90,7 @@ struct bpf_cpumask *bpf_cpumask_acquire(struct bpf_cpumask *cpumask) * kptr, or freed with bpf_cpumask_release(). This function may return NULL if * no BPF cpumask was found in the specified map value. */ -struct bpf_cpumask *bpf_cpumask_kptr_get(struct bpf_cpumask **cpumaskp) +__bpf_kfunc struct bpf_cpumask *bpf_cpumask_kptr_get(struct bpf_cpumask **cpumaskp) { struct bpf_cpumask *cpumask; @@ -113,7 +116,7 @@ struct bpf_cpumask *bpf_cpumask_kptr_get(struct bpf_cpumask **cpumaskp) * reference of the BPF cpumask has been released, it is subsequently freed in * an RCU callback in the BPF memory allocator. */ -void bpf_cpumask_release(struct bpf_cpumask *cpumask) +__bpf_kfunc void bpf_cpumask_release(struct bpf_cpumask *cpumask) { if (!cpumask) return; @@ -132,7 +135,7 @@ void bpf_cpumask_release(struct bpf_cpumask *cpumask) * Find the index of the first nonzero bit of the cpumask. A struct bpf_cpumask * pointer may be safely passed to this function. */ -u32 bpf_cpumask_first(const struct cpumask *cpumask) +__bpf_kfunc u32 bpf_cpumask_first(const struct cpumask *cpumask) { return cpumask_first(cpumask); } @@ -145,7 +148,7 @@ u32 bpf_cpumask_first(const struct cpumask *cpumask) * Find the index of the first unset bit of the cpumask. A struct bpf_cpumask * pointer may be safely passed to this function. */ -u32 bpf_cpumask_first_zero(const struct cpumask *cpumask) +__bpf_kfunc u32 bpf_cpumask_first_zero(const struct cpumask *cpumask) { return cpumask_first_zero(cpumask); } @@ -155,7 +158,7 @@ u32 bpf_cpumask_first_zero(const struct cpumask *cpumask) * @cpu: The CPU to be set in the cpumask. * @cpumask: The BPF cpumask in which a bit is being set. */ -void bpf_cpumask_set_cpu(u32 cpu, struct bpf_cpumask *cpumask) +__bpf_kfunc void bpf_cpumask_set_cpu(u32 cpu, struct bpf_cpumask *cpumask) { if (!cpu_valid(cpu)) return; @@ -168,7 +171,7 @@ void bpf_cpumask_set_cpu(u32 cpu, struct bpf_cpumask *cpumask) * @cpu: The CPU to be cleared from the cpumask. * @cpumask: The BPF cpumask in which a bit is being cleared. */ -void bpf_cpumask_clear_cpu(u32 cpu, struct bpf_cpumask *cpumask) +__bpf_kfunc void bpf_cpumask_clear_cpu(u32 cpu, struct bpf_cpumask *cpumask) { if (!cpu_valid(cpu)) return; @@ -185,7 +188,7 @@ void bpf_cpumask_clear_cpu(u32 cpu, struct bpf_cpumask *cpumask) * * true - @cpu is set in the cpumask * * false - @cpu was not set in the cpumask, or @cpu is an invalid cpu. */ -bool bpf_cpumask_test_cpu(u32 cpu, const struct cpumask *cpumask) +__bpf_kfunc bool bpf_cpumask_test_cpu(u32 cpu, const struct cpumask *cpumask) { if (!cpu_valid(cpu)) return false; @@ -202,7 +205,7 @@ bool bpf_cpumask_test_cpu(u32 cpu, const struct cpumask *cpumask) * * true - @cpu is set in the cpumask * * false - @cpu was not set in the cpumask, or @cpu is invalid. 
*/ -bool bpf_cpumask_test_and_set_cpu(u32 cpu, struct bpf_cpumask *cpumask) +__bpf_kfunc bool bpf_cpumask_test_and_set_cpu(u32 cpu, struct bpf_cpumask *cpumask) { if (!cpu_valid(cpu)) return false; @@ -220,7 +223,7 @@ bool bpf_cpumask_test_and_set_cpu(u32 cpu, struct bpf_cpumask *cpumask) * * true - @cpu is set in the cpumask * * false - @cpu was not set in the cpumask, or @cpu is invalid. */ -bool bpf_cpumask_test_and_clear_cpu(u32 cpu, struct bpf_cpumask *cpumask) +__bpf_kfunc bool bpf_cpumask_test_and_clear_cpu(u32 cpu, struct bpf_cpumask *cpumask) { if (!cpu_valid(cpu)) return false; @@ -232,7 +235,7 @@ bool bpf_cpumask_test_and_clear_cpu(u32 cpu, struct bpf_cpumask *cpumask) * bpf_cpumask_setall() - Set all of the bits in a BPF cpumask. * @cpumask: The BPF cpumask having all of its bits set. */ -void bpf_cpumask_setall(struct bpf_cpumask *cpumask) +__bpf_kfunc void bpf_cpumask_setall(struct bpf_cpumask *cpumask) { cpumask_setall((struct cpumask *)cpumask); } @@ -241,7 +244,7 @@ void bpf_cpumask_setall(struct bpf_cpumask *cpumask) * bpf_cpumask_clear() - Clear all of the bits in a BPF cpumask. * @cpumask: The BPF cpumask being cleared. */ -void bpf_cpumask_clear(struct bpf_cpumask *cpumask) +__bpf_kfunc void bpf_cpumask_clear(struct bpf_cpumask *cpumask) { cpumask_clear((struct cpumask *)cpumask); } @@ -258,9 +261,9 @@ void bpf_cpumask_clear(struct bpf_cpumask *cpumask) * * struct bpf_cpumask pointers may be safely passed to @src1 and @src2. */ -bool bpf_cpumask_and(struct bpf_cpumask *dst, - const struct cpumask *src1, - const struct cpumask *src2) +__bpf_kfunc bool bpf_cpumask_and(struct bpf_cpumask *dst, + const struct cpumask *src1, + const struct cpumask *src2) { return cpumask_and((struct cpumask *)dst, src1, src2); } @@ -273,9 +276,9 @@ bool bpf_cpumask_and(struct bpf_cpumask *dst, * * struct bpf_cpumask pointers may be safely passed to @src1 and @src2. */ -void bpf_cpumask_or(struct bpf_cpumask *dst, - const struct cpumask *src1, - const struct cpumask *src2) +__bpf_kfunc void bpf_cpumask_or(struct bpf_cpumask *dst, + const struct cpumask *src1, + const struct cpumask *src2) { cpumask_or((struct cpumask *)dst, src1, src2); } @@ -288,9 +291,9 @@ void bpf_cpumask_or(struct bpf_cpumask *dst, * * struct bpf_cpumask pointers may be safely passed to @src1 and @src2. */ -void bpf_cpumask_xor(struct bpf_cpumask *dst, - const struct cpumask *src1, - const struct cpumask *src2) +__bpf_kfunc void bpf_cpumask_xor(struct bpf_cpumask *dst, + const struct cpumask *src1, + const struct cpumask *src2) { cpumask_xor((struct cpumask *)dst, src1, src2); } @@ -306,7 +309,7 @@ void bpf_cpumask_xor(struct bpf_cpumask *dst, * * struct bpf_cpumask pointers may be safely passed to @src1 and @src2. */ -bool bpf_cpumask_equal(const struct cpumask *src1, const struct cpumask *src2) +__bpf_kfunc bool bpf_cpumask_equal(const struct cpumask *src1, const struct cpumask *src2) { return cpumask_equal(src1, src2); } @@ -322,7 +325,7 @@ bool bpf_cpumask_equal(const struct cpumask *src1, const struct cpumask *src2) * * struct bpf_cpumask pointers may be safely passed to @src1 and @src2. */ -bool bpf_cpumask_intersects(const struct cpumask *src1, const struct cpumask *src2) +__bpf_kfunc bool bpf_cpumask_intersects(const struct cpumask *src1, const struct cpumask *src2) { return cpumask_intersects(src1, src2); } @@ -338,7 +341,7 @@ bool bpf_cpumask_intersects(const struct cpumask *src1, const struct cpumask *sr * * struct bpf_cpumask pointers may be safely passed to @src1 and @src2. 
*/ -bool bpf_cpumask_subset(const struct cpumask *src1, const struct cpumask *src2) +__bpf_kfunc bool bpf_cpumask_subset(const struct cpumask *src1, const struct cpumask *src2) { return cpumask_subset(src1, src2); } @@ -353,7 +356,7 @@ bool bpf_cpumask_subset(const struct cpumask *src1, const struct cpumask *src2) * * A struct bpf_cpumask pointer may be safely passed to @cpumask. */ -bool bpf_cpumask_empty(const struct cpumask *cpumask) +__bpf_kfunc bool bpf_cpumask_empty(const struct cpumask *cpumask) { return cpumask_empty(cpumask); } @@ -368,7 +371,7 @@ bool bpf_cpumask_empty(const struct cpumask *cpumask) * * A struct bpf_cpumask pointer may be safely passed to @cpumask. */ -bool bpf_cpumask_full(const struct cpumask *cpumask) +__bpf_kfunc bool bpf_cpumask_full(const struct cpumask *cpumask) { return cpumask_full(cpumask); } @@ -380,7 +383,7 @@ bool bpf_cpumask_full(const struct cpumask *cpumask) * * A struct bpf_cpumask pointer may be safely passed to @src. */ -void bpf_cpumask_copy(struct bpf_cpumask *dst, const struct cpumask *src) +__bpf_kfunc void bpf_cpumask_copy(struct bpf_cpumask *dst, const struct cpumask *src) { cpumask_copy((struct cpumask *)dst, src); } @@ -395,7 +398,7 @@ void bpf_cpumask_copy(struct bpf_cpumask *dst, const struct cpumask *src) * * A struct bpf_cpumask pointer may be safely passed to @src. */ -u32 bpf_cpumask_any(const struct cpumask *cpumask) +__bpf_kfunc u32 bpf_cpumask_any(const struct cpumask *cpumask) { return cpumask_any(cpumask); } @@ -412,7 +415,7 @@ u32 bpf_cpumask_any(const struct cpumask *cpumask) * * struct bpf_cpumask pointers may be safely passed to @src1 and @src2. */ -u32 bpf_cpumask_any_and(const struct cpumask *src1, const struct cpumask *src2) +__bpf_kfunc u32 bpf_cpumask_any_and(const struct cpumask *src1, const struct cpumask *src2) { return cpumask_any_and(src1, src2); } diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c index d01e4c55b376..2675fefc6cb6 100644 --- a/kernel/bpf/devmap.c +++ b/kernel/bpf/devmap.c @@ -474,7 +474,11 @@ static inline int __xdp_enqueue(struct net_device *dev, struct xdp_frame *xdpf, { int err; - if (!dev->netdev_ops->ndo_xdp_xmit) + if (!(dev->xdp_features & NETDEV_XDP_ACT_NDO_XMIT)) + return -EOPNOTSUPP; + + if (unlikely(!(dev->xdp_features & NETDEV_XDP_ACT_NDO_XMIT_SG) && + xdp_frame_has_frags(xdpf))) return -EOPNOTSUPP; err = xdp_ok_fwd_dev(dev, xdp_get_frame_len(xdpf)); @@ -532,8 +536,14 @@ int dev_map_enqueue(struct bpf_dtab_netdev *dst, struct xdp_frame *xdpf, static bool is_valid_dst(struct bpf_dtab_netdev *obj, struct xdp_frame *xdpf) { - if (!obj || - !obj->dev->netdev_ops->ndo_xdp_xmit) + if (!obj) + return false; + + if (!(obj->dev->xdp_features & NETDEV_XDP_ACT_NDO_XMIT)) + return false; + + if (unlikely(!(obj->dev->xdp_features & NETDEV_XDP_ACT_NDO_XMIT_SG) && + xdp_frame_has_frags(xdpf))) return false; if (xdp_ok_fwd_dev(obj->dev, xdp_get_frame_len(xdpf))) diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c index 458db2db2f81..2dae44581922 100644 --- a/kernel/bpf/helpers.c +++ b/kernel/bpf/helpers.c @@ -1776,7 +1776,7 @@ __diag_push(); __diag_ignore_all("-Wmissing-prototypes", "Global functions as their definitions will be in vmlinux BTF"); -void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign) +__bpf_kfunc void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign) { struct btf_struct_meta *meta = meta__ign; u64 size = local_type_id__k; @@ -1790,7 +1790,7 @@ void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign) return p; } -void bpf_obj_drop_impl(void *p__alloc, 
void *meta__ign) +__bpf_kfunc void bpf_obj_drop_impl(void *p__alloc, void *meta__ign) { struct btf_struct_meta *meta = meta__ign; void *p = p__alloc; @@ -1811,12 +1811,12 @@ static void __bpf_list_add(struct bpf_list_node *node, struct bpf_list_head *hea tail ? list_add_tail(n, h) : list_add(n, h); } -void bpf_list_push_front(struct bpf_list_head *head, struct bpf_list_node *node) +__bpf_kfunc void bpf_list_push_front(struct bpf_list_head *head, struct bpf_list_node *node) { return __bpf_list_add(node, head, false); } -void bpf_list_push_back(struct bpf_list_head *head, struct bpf_list_node *node) +__bpf_kfunc void bpf_list_push_back(struct bpf_list_head *head, struct bpf_list_node *node) { return __bpf_list_add(node, head, true); } @@ -1834,12 +1834,12 @@ static struct bpf_list_node *__bpf_list_del(struct bpf_list_head *head, bool tai return (struct bpf_list_node *)n; } -struct bpf_list_node *bpf_list_pop_front(struct bpf_list_head *head) +__bpf_kfunc struct bpf_list_node *bpf_list_pop_front(struct bpf_list_head *head) { return __bpf_list_del(head, false); } -struct bpf_list_node *bpf_list_pop_back(struct bpf_list_head *head) +__bpf_kfunc struct bpf_list_node *bpf_list_pop_back(struct bpf_list_head *head) { return __bpf_list_del(head, true); } @@ -1850,7 +1850,7 @@ struct bpf_list_node *bpf_list_pop_back(struct bpf_list_head *head) * bpf_task_release(). * @p: The task on which a reference is being acquired. */ -struct task_struct *bpf_task_acquire(struct task_struct *p) +__bpf_kfunc struct task_struct *bpf_task_acquire(struct task_struct *p) { return get_task_struct(p); } @@ -1861,7 +1861,7 @@ struct task_struct *bpf_task_acquire(struct task_struct *p) * released by calling bpf_task_release(). * @p: The task on which a reference is being acquired. */ -struct task_struct *bpf_task_acquire_not_zero(struct task_struct *p) +__bpf_kfunc struct task_struct *bpf_task_acquire_not_zero(struct task_struct *p) { /* For the time being this function returns NULL, as it's not currently * possible to safely acquire a reference to a task with RCU protection @@ -1913,7 +1913,7 @@ struct task_struct *bpf_task_acquire_not_zero(struct task_struct *p) * be released by calling bpf_task_release(). * @pp: A pointer to a task kptr on which a reference is being acquired. */ -struct task_struct *bpf_task_kptr_get(struct task_struct **pp) +__bpf_kfunc struct task_struct *bpf_task_kptr_get(struct task_struct **pp) { /* We must return NULL here until we have clarity on how to properly * leverage RCU for ensuring a task's lifetime. See the comment above @@ -1926,7 +1926,7 @@ struct task_struct *bpf_task_kptr_get(struct task_struct **pp) * bpf_task_release - Release the reference acquired on a task. * @p: The task on which a reference is being released. */ -void bpf_task_release(struct task_struct *p) +__bpf_kfunc void bpf_task_release(struct task_struct *p) { if (!p) return; @@ -1941,7 +1941,7 @@ void bpf_task_release(struct task_struct *p) * calling bpf_cgroup_release(). * @cgrp: The cgroup on which a reference is being acquired. */ -struct cgroup *bpf_cgroup_acquire(struct cgroup *cgrp) +__bpf_kfunc struct cgroup *bpf_cgroup_acquire(struct cgroup *cgrp) { cgroup_get(cgrp); return cgrp; @@ -1953,7 +1953,7 @@ struct cgroup *bpf_cgroup_acquire(struct cgroup *cgrp) * be released by calling bpf_cgroup_release(). * @cgrpp: A pointer to a cgroup kptr on which a reference is being acquired. 
*/ -struct cgroup *bpf_cgroup_kptr_get(struct cgroup **cgrpp) +__bpf_kfunc struct cgroup *bpf_cgroup_kptr_get(struct cgroup **cgrpp) { struct cgroup *cgrp; @@ -1985,7 +1985,7 @@ struct cgroup *bpf_cgroup_kptr_get(struct cgroup **cgrpp) * drops to 0. * @cgrp: The cgroup on which a reference is being released. */ -void bpf_cgroup_release(struct cgroup *cgrp) +__bpf_kfunc void bpf_cgroup_release(struct cgroup *cgrp) { if (!cgrp) return; @@ -2000,7 +2000,7 @@ void bpf_cgroup_release(struct cgroup *cgrp) * @cgrp: The cgroup for which we're performing a lookup. * @level: The level of ancestor to look up. */ -struct cgroup *bpf_cgroup_ancestor(struct cgroup *cgrp, int level) +__bpf_kfunc struct cgroup *bpf_cgroup_ancestor(struct cgroup *cgrp, int level) { struct cgroup *ancestor; @@ -2019,7 +2019,7 @@ struct cgroup *bpf_cgroup_ancestor(struct cgroup *cgrp, int level) * stored in a map, or released with bpf_task_release(). * @pid: The pid of the task being looked up. */ -struct task_struct *bpf_task_from_pid(s32 pid) +__bpf_kfunc struct task_struct *bpf_task_from_pid(s32 pid) { struct task_struct *p; @@ -2032,22 +2032,22 @@ struct task_struct *bpf_task_from_pid(s32 pid) return p; } -void *bpf_cast_to_kern_ctx(void *obj) +__bpf_kfunc void *bpf_cast_to_kern_ctx(void *obj) { return obj; } -void *bpf_rdonly_cast(void *obj__ign, u32 btf_id__k) +__bpf_kfunc void *bpf_rdonly_cast(void *obj__ign, u32 btf_id__k) { return obj__ign; } -void bpf_rcu_read_lock(void) +__bpf_kfunc void bpf_rcu_read_lock(void) { rcu_read_lock(); } -void bpf_rcu_read_unlock(void) +__bpf_kfunc void bpf_rcu_read_unlock(void) { rcu_read_unlock(); } diff --git a/kernel/bpf/offload.c b/kernel/bpf/offload.c index 88aae38fde66..0c85e06f7ea7 100644 --- a/kernel/bpf/offload.c +++ b/kernel/bpf/offload.c @@ -136,7 +136,7 @@ static void __bpf_map_offload_destroy(struct bpf_offloaded_map *offmap) { WARN_ON(bpf_map_offload_ndo(offmap, BPF_OFFLOAD_MAP_FREE)); /* Make sure BPF_MAP_GET_NEXT_ID can't find this dead map */ - bpf_map_free_id(&offmap->map, true); + bpf_map_free_id(&offmap->map); list_del_init(&offmap->offloads); offmap->netdev = NULL; } diff --git a/kernel/bpf/preload/bpf_preload_kern.c b/kernel/bpf/preload/bpf_preload_kern.c index 5106b5372f0c..b56f9f3314fd 100644 --- a/kernel/bpf/preload/bpf_preload_kern.c +++ b/kernel/bpf/preload/bpf_preload_kern.c @@ -3,7 +3,11 @@ #include <linux/init.h> #include <linux/module.h> #include "bpf_preload.h" -#include "iterators/iterators.lskel.h" +#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ +#include "iterators/iterators.lskel-little-endian.h" +#else +#include "iterators/iterators.lskel-big-endian.h" +#endif static struct bpf_link *maps_link, *progs_link; static struct iterators_bpf *skel; diff --git a/kernel/bpf/preload/iterators/Makefile b/kernel/bpf/preload/iterators/Makefile index 6762b1260f2f..8937dc6bc8d0 100644 --- a/kernel/bpf/preload/iterators/Makefile +++ b/kernel/bpf/preload/iterators/Makefile @@ -35,20 +35,22 @@ endif .PHONY: all clean -all: iterators.lskel.h +all: iterators.lskel-little-endian.h + +big: iterators.lskel-big-endian.h clean: $(call msg,CLEAN) $(Q)rm -rf $(OUTPUT) iterators -iterators.lskel.h: $(OUTPUT)/iterators.bpf.o | $(BPFTOOL) +iterators.lskel-%.h: $(OUTPUT)/%/iterators.bpf.o | $(BPFTOOL) $(call msg,GEN-SKEL,$@) $(Q)$(BPFTOOL) gen skeleton -L $< > $@ - -$(OUTPUT)/iterators.bpf.o: iterators.bpf.c $(BPFOBJ) | $(OUTPUT) +$(OUTPUT)/%/iterators.bpf.o: iterators.bpf.c $(BPFOBJ) | $(OUTPUT) $(call msg,BPF,$@) - $(Q)$(CLANG) -g -O2 -target bpf $(INCLUDES) \ + $(Q)mkdir -p 
$(@D) + $(Q)$(CLANG) -g -O2 -target bpf -m$* $(INCLUDES) \ -c $(filter %.c,$^) -o $@ && \ $(LLVM_STRIP) -g $@ diff --git a/kernel/bpf/preload/iterators/README b/kernel/bpf/preload/iterators/README index 7fd6d39a9ad2..98e7c90ea012 100644 --- a/kernel/bpf/preload/iterators/README +++ b/kernel/bpf/preload/iterators/README @@ -1,4 +1,7 @@ WARNING: -If you change "iterators.bpf.c" do "make -j" in this directory to rebuild "iterators.skel.h". +If you change "iterators.bpf.c" do "make -j" in this directory to +rebuild "iterators.lskel-little-endian.h". Then, on a big-endian +machine, do "make -j big" in this directory to rebuild +"iterators.lskel-big-endian.h". Commit both resulting headers. Make sure to have clang 10 installed. See Documentation/bpf/bpf_devel_QA.rst diff --git a/kernel/bpf/preload/iterators/iterators.lskel-big-endian.h b/kernel/bpf/preload/iterators/iterators.lskel-big-endian.h new file mode 100644 index 000000000000..ebdc6c0cdb70 --- /dev/null +++ b/kernel/bpf/preload/iterators/iterators.lskel-big-endian.h @@ -0,0 +1,419 @@ +/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */ +/* THIS FILE IS AUTOGENERATED BY BPFTOOL! */ +#ifndef __ITERATORS_BPF_SKEL_H__ +#define __ITERATORS_BPF_SKEL_H__ + +#include <bpf/skel_internal.h> + +struct iterators_bpf { + struct bpf_loader_ctx ctx; + struct { + struct bpf_map_desc rodata; + } maps; + struct { + struct bpf_prog_desc dump_bpf_map; + struct bpf_prog_desc dump_bpf_prog; + } progs; + struct { + int dump_bpf_map_fd; + int dump_bpf_prog_fd; + } links; +}; + +static inline int +iterators_bpf__dump_bpf_map__attach(struct iterators_bpf *skel) +{ + int prog_fd = skel->progs.dump_bpf_map.prog_fd; + int fd = skel_link_create(prog_fd, 0, BPF_TRACE_ITER); + + if (fd > 0) + skel->links.dump_bpf_map_fd = fd; + return fd; +} + +static inline int +iterators_bpf__dump_bpf_prog__attach(struct iterators_bpf *skel) +{ + int prog_fd = skel->progs.dump_bpf_prog.prog_fd; + int fd = skel_link_create(prog_fd, 0, BPF_TRACE_ITER); + + if (fd > 0) + skel->links.dump_bpf_prog_fd = fd; + return fd; +} + +static inline int +iterators_bpf__attach(struct iterators_bpf *skel) +{ + int ret = 0; + + ret = ret < 0 ? ret : iterators_bpf__dump_bpf_map__attach(skel); + ret = ret < 0 ? ret : iterators_bpf__dump_bpf_prog__attach(skel); + return ret < 0 ? 
ret : 0; +} + +static inline void +iterators_bpf__detach(struct iterators_bpf *skel) +{ + skel_closenz(skel->links.dump_bpf_map_fd); + skel_closenz(skel->links.dump_bpf_prog_fd); +} +static void +iterators_bpf__destroy(struct iterators_bpf *skel) +{ + if (!skel) + return; + iterators_bpf__detach(skel); + skel_closenz(skel->progs.dump_bpf_map.prog_fd); + skel_closenz(skel->progs.dump_bpf_prog.prog_fd); + skel_closenz(skel->maps.rodata.map_fd); + skel_free(skel); +} +static inline struct iterators_bpf * +iterators_bpf__open(void) +{ + struct iterators_bpf *skel; + + skel = skel_alloc(sizeof(*skel)); + if (!skel) + goto cleanup; + skel->ctx.sz = (void *)&skel->links - (void *)skel; + return skel; +cleanup: + iterators_bpf__destroy(skel); + return NULL; +} + +static inline int +iterators_bpf__load(struct iterators_bpf *skel) +{ + struct bpf_load_and_run_opts opts = {}; + int err; + + opts.ctx = (struct bpf_loader_ctx *)skel; + opts.data_sz = 6008; + opts.data = (void *)"\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ 
+\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xeb\x9f\x01\0\ +\0\0\0\x18\0\0\0\0\0\0\x04\x1c\0\0\x04\x1c\0\0\x05\x18\0\0\0\0\x02\0\0\0\0\0\0\ +\x02\0\0\0\x01\x04\0\0\x02\0\0\0\x10\0\0\0\x13\0\0\0\x03\0\0\0\0\0\0\0\x18\0\0\ +\0\x04\0\0\0\x40\0\0\0\0\x02\0\0\0\0\0\0\x08\0\0\0\0\x02\0\0\0\0\0\0\x0d\0\0\0\ +\0\x0d\0\0\x01\0\0\0\x06\0\0\0\x1c\0\0\0\x01\0\0\0\x20\x01\0\0\0\0\0\0\x04\x01\ +\0\0\x20\0\0\0\x24\x0c\0\0\x01\0\0\0\x05\0\0\0\xc2\x04\0\0\x03\0\0\0\x18\0\0\0\ +\xd0\0\0\0\x09\0\0\0\0\0\0\0\xd4\0\0\0\x0b\0\0\0\x40\0\0\0\xdf\0\0\0\x0b\0\0\0\ +\x80\0\0\0\0\x02\0\0\0\0\0\0\x0a\0\0\0\xe7\x07\0\0\0\0\0\0\0\0\0\0\xf0\x08\0\0\ +\0\0\0\0\x0c\0\0\0\xf6\x01\0\0\0\0\0\0\x08\0\0\0\x40\0\0\x01\xb3\x04\0\0\x03\0\ +\0\0\x18\0\0\x01\xbb\0\0\0\x0e\0\0\0\0\0\0\x01\xbe\0\0\0\x11\0\0\0\x20\0\0\x01\ +\xc3\0\0\0\x0e\0\0\0\xa0\0\0\x01\xcf\x08\0\0\0\0\0\0\x0f\0\0\x01\xd5\x01\0\0\0\ +\0\0\0\x04\0\0\0\x20\0\0\x01\xe2\x01\0\0\0\0\0\0\x01\x01\0\0\x08\0\0\0\0\x03\0\ +\0\0\0\0\0\0\0\0\0\x10\0\0\0\x12\0\0\0\x10\0\0\x01\xe7\x01\0\0\0\0\0\0\x04\0\0\ +\0\x20\0\0\0\0\x02\0\0\0\0\0\0\x14\0\0\x02\x4b\x04\0\0\x02\0\0\0\x10\0\0\0\x13\ +\0\0\0\x03\0\0\0\0\0\0\x02\x5e\0\0\0\x15\0\0\0\x40\0\0\0\0\x02\0\0\0\0\0\0\x18\ +\0\0\0\0\x0d\0\0\x01\0\0\0\x06\0\0\0\x1c\0\0\0\x13\0\0\x02\x63\x0c\0\0\x01\0\0\ +\0\x16\0\0\x02\xaf\x04\0\0\x01\0\0\0\x08\0\0\x02\xb8\0\0\0\x19\0\0\0\0\0\0\0\0\ +\x02\0\0\0\0\0\0\x1a\0\0\x03\x09\x04\0\0\x06\0\0\0\x38\0\0\x01\xbb\0\0\0\x0e\0\ +\0\0\0\0\0\x01\xbe\0\0\0\x11\0\0\0\x20\0\0\x03\x16\0\0\0\x1b\0\0\0\xc0\0\0\x03\ +\x27\0\0\0\x15\0\0\x01\0\0\0\x03\x30\0\0\0\x1d\0\0\x01\x40\0\0\x03\x3a\0\0\0\ +\x1e\0\0\x01\x80\0\0\0\0\x02\0\0\0\0\0\0\x1c\0\0\0\0\x0a\0\0\0\0\0\0\x10\0\0\0\ +\0\x02\0\0\0\0\0\0\x1f\0\0\0\0\x02\0\0\0\0\0\0\x20\0\0\x03\x84\x04\0\0\x02\0\0\ +\0\x08\0\0\x03\x92\0\0\0\x0e\0\0\0\0\0\0\x03\x9b\0\0\0\x0e\0\0\0\x20\0\0\x03\ +\x3a\x04\0\0\x03\0\0\0\x18\0\0\x03\xa5\0\0\0\x1b\0\0\0\0\0\0\x03\xad\0\0\0\x21\ +\0\0\0\x40\0\0\x03\xb3\0\0\0\x23\0\0\0\x80\0\0\0\0\x02\0\0\0\0\0\0\x22\0\0\0\0\ +\x02\0\0\0\0\0\0\x24\0\0\x03\xb7\x04\0\0\x01\0\0\0\x04\0\0\x03\xc2\0\0\0\x0e\0\ +\0\0\0\0\0\x04\x2b\x04\0\0\x01\0\0\0\x04\0\0\x04\x34\0\0\0\x0e\0\0\0\0\0\0\0\0\ +\x03\0\0\0\0\0\0\0\0\0\0\x1c\0\0\0\x12\0\0\0\x23\0\0\x04\xaa\x0e\0\0\0\0\0\0\ +\x25\0\0\0\0\0\0\0\0\x03\0\0\0\0\0\0\0\0\0\0\x1c\0\0\0\x12\0\0\0\x0e\0\0\x04\ +\xbe\x0e\0\0\0\0\0\0\x27\0\0\0\0\0\0\0\0\x03\0\0\0\0\0\0\0\0\0\0\x1c\0\0\0\x12\ +\0\0\0\x20\0\0\x04\xd4\x0e\0\0\0\0\0\0\x29\0\0\0\0\0\0\0\0\x03\0\0\0\0\0\0\0\0\ +\0\0\x1c\0\0\0\x12\0\0\0\x11\0\0\x04\xe9\x0e\0\0\0\0\0\0\x2b\0\0\0\0\0\0\0\0\ +\x03\0\0\0\0\0\0\0\0\0\0\x10\0\0\0\x12\0\0\0\x04\0\0\x05\0\x0e\0\0\0\0\0\0\x2d\ +\0\0\0\x01\0\0\x05\x08\x0f\0\0\x04\0\0\0\x62\0\0\0\x26\0\0\0\0\0\0\0\x23\0\0\0\ +\x28\0\0\0\x23\0\0\0\x0e\0\0\0\x2a\0\0\0\x31\0\0\0\x20\0\0\0\x2c\0\0\0\x51\0\0\ +\0\x11\0\0\x05\x10\x0f\0\0\x01\0\0\0\x04\0\0\0\x2e\0\0\0\0\0\0\0\x04\0\x62\x70\ +\x66\x5f\x69\x74\x65\x72\x5f\x5f\x62\x70\x66\x5f\x6d\x61\x70\0\x6d\x65\x74\x61\ +\0\x6d\x61\x70\0\x63\x74\x78\0\x69\x6e\x74\0\x64\x75\x6d\x70\x5f\x62\x70\x66\ +\x5f\x6d\x61\x70\0\x69\x74\x65\x72\x2f\x62\x70\x66\x5f\x6d\x61\x70\0\x30\x3a\ +\x30\0\x2f\x68\x6f\x6d\x65\x2f\x69\x69\x69\x2f\x6c\x69\x6e\x75\x78\x2d\x6b\x65\ +\x72\x6e\x65\x6c\x2d\x74\x6f\x6f\x6c\x63\x68\x61\x69\x6e\x2f\x73\x72\x63\x2f\ +\x6c\x69\x6e\x75\x78\x2f\x6b\x65\x72\x6e\x65\x6c\x2f\x62\x70\x66\x2f\x70\x72\ +\x65\x6c\x6f\x61\x64\x2f\x69\x74\x65\x72\x61\x74\x6f\x72\x73\x2f\x69\x74\x65\ 
+\x72\x61\x74\x6f\x72\x73\x2e\x62\x70\x66\x2e\x63\0\x09\x73\x74\x72\x75\x63\x74\ +\x20\x73\x65\x71\x5f\x66\x69\x6c\x65\x20\x2a\x73\x65\x71\x20\x3d\x20\x63\x74\ +\x78\x2d\x3e\x6d\x65\x74\x61\x2d\x3e\x73\x65\x71\x3b\0\x62\x70\x66\x5f\x69\x74\ +\x65\x72\x5f\x6d\x65\x74\x61\0\x73\x65\x71\0\x73\x65\x73\x73\x69\x6f\x6e\x5f\ +\x69\x64\0\x73\x65\x71\x5f\x6e\x75\x6d\0\x73\x65\x71\x5f\x66\x69\x6c\x65\0\x5f\ +\x5f\x75\x36\x34\0\x75\x6e\x73\x69\x67\x6e\x65\x64\x20\x6c\x6f\x6e\x67\x20\x6c\ +\x6f\x6e\x67\0\x30\x3a\x31\0\x09\x73\x74\x72\x75\x63\x74\x20\x62\x70\x66\x5f\ +\x6d\x61\x70\x20\x2a\x6d\x61\x70\x20\x3d\x20\x63\x74\x78\x2d\x3e\x6d\x61\x70\ +\x3b\0\x09\x69\x66\x20\x28\x21\x6d\x61\x70\x29\0\x30\x3a\x32\0\x09\x5f\x5f\x75\ +\x36\x34\x20\x73\x65\x71\x5f\x6e\x75\x6d\x20\x3d\x20\x63\x74\x78\x2d\x3e\x6d\ +\x65\x74\x61\x2d\x3e\x73\x65\x71\x5f\x6e\x75\x6d\x3b\0\x09\x69\x66\x20\x28\x73\ +\x65\x71\x5f\x6e\x75\x6d\x20\x3d\x3d\x20\x30\x29\0\x09\x09\x42\x50\x46\x5f\x53\ +\x45\x51\x5f\x50\x52\x49\x4e\x54\x46\x28\x73\x65\x71\x2c\x20\x22\x20\x20\x69\ +\x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\ +\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\x73\x5c\x6e\x22\x29\x3b\0\x62\x70\x66\ +\x5f\x6d\x61\x70\0\x69\x64\0\x6e\x61\x6d\x65\0\x6d\x61\x78\x5f\x65\x6e\x74\x72\ +\x69\x65\x73\0\x5f\x5f\x75\x33\x32\0\x75\x6e\x73\x69\x67\x6e\x65\x64\x20\x69\ +\x6e\x74\0\x63\x68\x61\x72\0\x5f\x5f\x41\x52\x52\x41\x59\x5f\x53\x49\x5a\x45\ +\x5f\x54\x59\x50\x45\x5f\x5f\0\x09\x42\x50\x46\x5f\x53\x45\x51\x5f\x50\x52\x49\ +\x4e\x54\x46\x28\x73\x65\x71\x2c\x20\x22\x25\x34\x75\x20\x25\x2d\x31\x36\x73\ +\x25\x36\x64\x5c\x6e\x22\x2c\x20\x6d\x61\x70\x2d\x3e\x69\x64\x2c\x20\x6d\x61\ +\x70\x2d\x3e\x6e\x61\x6d\x65\x2c\x20\x6d\x61\x70\x2d\x3e\x6d\x61\x78\x5f\x65\ +\x6e\x74\x72\x69\x65\x73\x29\x3b\0\x7d\0\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\ +\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\0\x70\x72\x6f\x67\0\x64\x75\x6d\x70\x5f\ +\x62\x70\x66\x5f\x70\x72\x6f\x67\0\x69\x74\x65\x72\x2f\x62\x70\x66\x5f\x70\x72\ +\x6f\x67\0\x09\x73\x74\x72\x75\x63\x74\x20\x62\x70\x66\x5f\x70\x72\x6f\x67\x20\ +\x2a\x70\x72\x6f\x67\x20\x3d\x20\x63\x74\x78\x2d\x3e\x70\x72\x6f\x67\x3b\0\x09\ +\x69\x66\x20\x28\x21\x70\x72\x6f\x67\x29\0\x62\x70\x66\x5f\x70\x72\x6f\x67\0\ +\x61\x75\x78\0\x09\x61\x75\x78\x20\x3d\x20\x70\x72\x6f\x67\x2d\x3e\x61\x75\x78\ +\x3b\0\x09\x09\x42\x50\x46\x5f\x53\x45\x51\x5f\x50\x52\x49\x4e\x54\x46\x28\x73\ +\x65\x71\x2c\x20\x22\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\x20\ +\x20\x20\x20\x20\x20\x20\x20\x20\x61\x74\x74\x61\x63\x68\x65\x64\x5c\x6e\x22\ +\x29\x3b\0\x62\x70\x66\x5f\x70\x72\x6f\x67\x5f\x61\x75\x78\0\x61\x74\x74\x61\ +\x63\x68\x5f\x66\x75\x6e\x63\x5f\x6e\x61\x6d\x65\0\x64\x73\x74\x5f\x70\x72\x6f\ +\x67\0\x66\x75\x6e\x63\x5f\x69\x6e\x66\x6f\0\x62\x74\x66\0\x09\x42\x50\x46\x5f\ +\x53\x45\x51\x5f\x50\x52\x49\x4e\x54\x46\x28\x73\x65\x71\x2c\x20\x22\x25\x34\ +\x75\x20\x25\x2d\x31\x36\x73\x20\x25\x73\x20\x25\x73\x5c\x6e\x22\x2c\x20\x61\ +\x75\x78\x2d\x3e\x69\x64\x2c\0\x30\x3a\x34\0\x30\x3a\x35\0\x09\x69\x66\x20\x28\ +\x21\x62\x74\x66\x29\0\x62\x70\x66\x5f\x66\x75\x6e\x63\x5f\x69\x6e\x66\x6f\0\ +\x69\x6e\x73\x6e\x5f\x6f\x66\x66\0\x74\x79\x70\x65\x5f\x69\x64\0\x30\0\x73\x74\ +\x72\x69\x6e\x67\x73\0\x74\x79\x70\x65\x73\0\x68\x64\x72\0\x62\x74\x66\x5f\x68\ +\x65\x61\x64\x65\x72\0\x73\x74\x72\x5f\x6c\x65\x6e\0\x09\x74\x79\x70\x65\x73\ +\x20\x3d\x20\x62\x74\x66\x2d\x3e\x74\x79\x70\x65\x73\x3b\0\x09\x62\x70\x66\x5f\ +\x70\x72\x6f\x62\x65\x5f\x72\x65\x61\x64\x5f\x6b\x65\x72\x6e\x65\x6c\x28\x26\ 
+\x74\x2c\x20\x73\x69\x7a\x65\x6f\x66\x28\x74\x29\x2c\x20\x74\x79\x70\x65\x73\ +\x20\x2b\x20\x62\x74\x66\x5f\x69\x64\x29\x3b\0\x09\x73\x74\x72\x20\x3d\x20\x62\ +\x74\x66\x2d\x3e\x73\x74\x72\x69\x6e\x67\x73\x3b\0\x62\x74\x66\x5f\x74\x79\x70\ +\x65\0\x6e\x61\x6d\x65\x5f\x6f\x66\x66\0\x09\x6e\x61\x6d\x65\x5f\x6f\x66\x66\ +\x20\x3d\x20\x42\x50\x46\x5f\x43\x4f\x52\x45\x5f\x52\x45\x41\x44\x28\x74\x2c\ +\x20\x6e\x61\x6d\x65\x5f\x6f\x66\x66\x29\x3b\0\x30\x3a\x32\x3a\x30\0\x09\x69\ +\x66\x20\x28\x6e\x61\x6d\x65\x5f\x6f\x66\x66\x20\x3e\x3d\x20\x62\x74\x66\x2d\ +\x3e\x68\x64\x72\x2e\x73\x74\x72\x5f\x6c\x65\x6e\x29\0\x09\x72\x65\x74\x75\x72\ +\x6e\x20\x73\x74\x72\x20\x2b\x20\x6e\x61\x6d\x65\x5f\x6f\x66\x66\x3b\0\x30\x3a\ +\x33\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\x61\x70\x2e\x5f\x5f\x5f\x66\x6d\ +\x74\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\x61\x70\x2e\x5f\x5f\x5f\x66\x6d\ +\x74\x2e\x31\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\x2e\x5f\x5f\ +\x5f\x66\x6d\x74\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\x2e\x5f\ +\x5f\x5f\x66\x6d\x74\x2e\x32\0\x4c\x49\x43\x45\x4e\x53\x45\0\x2e\x72\x6f\x64\ +\x61\x74\x61\0\x6c\x69\x63\x65\x6e\x73\x65\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\x09\x4c\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x02\0\0\0\x04\0\0\0\x62\0\0\0\ +\x01\0\0\0\x80\0\0\0\0\0\0\0\0\x69\x74\x65\x72\x61\x74\x6f\x72\x2e\x72\x6f\x64\ +\x61\x74\x61\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x2f\0\0\0\0\0\0\0\0\0\0\0\0\x20\ +\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\ +\x20\x20\x6d\x61\x78\x5f\x65\x6e\x74\x72\x69\x65\x73\x0a\0\x25\x34\x75\x20\x25\ +\x2d\x31\x36\x73\x25\x36\x64\x0a\0\x20\x20\x69\x64\x20\x6e\x61\x6d\x65\x20\x20\ +\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x20\x61\x74\x74\x61\x63\x68\x65\x64\ +\x0a\0\x25\x34\x75\x20\x25\x2d\x31\x36\x73\x20\x25\x73\x20\x25\x73\x0a\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\x47\x50\x4c\0\0\0\0\0\x79\x21\0\0\0\0\0\0\x79\x62\0\0\ +\0\0\0\0\x79\x71\0\x08\0\0\0\0\x15\x70\0\x1a\0\0\0\0\x79\x12\0\x10\0\0\0\0\x55\ +\x10\0\x08\0\0\0\0\xbf\x4a\0\0\0\0\0\0\x07\x40\0\0\xff\xff\xff\xe8\xbf\x16\0\0\ +\0\0\0\0\x18\x26\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xb7\x30\0\0\0\0\0\x23\xb7\x50\0\0\ +\0\0\0\0\x85\0\0\0\0\0\0\x7e\x61\x17\0\0\0\0\0\0\x7b\xa1\xff\xe8\0\0\0\0\xb7\ +\x10\0\0\0\0\0\x04\xbf\x27\0\0\0\0\0\0\x0f\x21\0\0\0\0\0\0\x7b\xa2\xff\xf0\0\0\ +\0\0\x61\x17\0\x14\0\0\0\0\x7b\xa1\xff\xf8\0\0\0\0\xbf\x4a\0\0\0\0\0\0\x07\x40\ +\0\0\xff\xff\xff\xe8\xbf\x16\0\0\0\0\0\0\x18\x26\0\0\0\0\0\0\0\0\0\0\0\0\0\x23\ +\xb7\x30\0\0\0\0\0\x0e\xb7\x50\0\0\0\0\0\x18\x85\0\0\0\0\0\0\x7e\xb7\0\0\0\0\0\ +\0\0\x95\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x07\0\0\0\0\0\0\0\x42\0\0\0\x9a\0\x01\x3c\ +\x1e\0\0\0\x01\0\0\0\x42\0\0\0\x9a\0\x01\x3c\x24\0\0\0\x02\0\0\0\x42\0\0\x01\ +\x0d\0\x01\x44\x1d\0\0\0\x03\0\0\0\x42\0\0\x01\x2e\0\x01\x4c\x06\0\0\0\x04\0\0\ +\0\x42\0\0\x01\x3d\0\x01\x40\x1d\0\0\0\x05\0\0\0\x42\0\0\x01\x62\0\x01\x58\x06\ +\0\0\0\x07\0\0\0\x42\0\0\x01\x75\0\x01\x5c\x03\0\0\0\x0e\0\0\0\x42\0\0\x01\xfb\ +\0\x01\x64\x02\0\0\0\x1e\0\0\0\x42\0\0\x02\x49\0\x01\x6c\x01\0\0\0\0\0\0\0\x02\ +\0\0\0\x3e\0\0\0\0\0\0\0\x08\0\0\0\x08\0\0\0\x3e\0\0\0\0\0\0\0\x10\0\0\0\x02\0\ +\0\x01\x09\0\0\0\0\0\0\0\x20\0\0\0\x08\0\0\x01\x39\0\0\0\0\0\0\0\x70\0\0\0\x0d\ +\0\0\0\x3e\0\0\0\0\0\0\0\x80\0\0\0\x0d\0\0\x01\x09\0\0\0\0\0\0\0\xa0\0\0\0\x0d\ +\0\0\x01\x39\0\0\0\0\0\0\0\x1a\0\0\0\x20\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\ 
+\x6d\x61\x70\0\0\0\0\0\0\0\0\0\0\0\x1c\0\0\0\0\0\0\0\x08\0\0\0\0\0\0\0\0\0\0\0\ +\x01\0\0\0\x10\0\0\0\0\0\0\0\0\0\0\0\x09\0\0\0\x01\0\0\0\0\0\0\0\x07\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x10\0\0\0\0\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\ +\x62\x70\x66\x5f\x6d\x61\x70\0\0\0\0\0\0\0\0\x47\x50\x4c\0\0\0\0\0\x79\x21\0\0\ +\0\0\0\0\x79\x62\0\0\0\0\0\0\x79\x11\0\x08\0\0\0\0\x15\x10\0\x3b\0\0\0\0\x79\ +\x71\0\0\0\0\0\0\x79\x12\0\x10\0\0\0\0\x55\x10\0\x08\0\0\0\0\xbf\x4a\0\0\0\0\0\ +\0\x07\x40\0\0\xff\xff\xff\xd0\xbf\x16\0\0\0\0\0\0\x18\x26\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\x31\xb7\x30\0\0\0\0\0\x20\xb7\x50\0\0\0\0\0\0\x85\0\0\0\0\0\0\x7e\x7b\ +\xa6\xff\xc8\0\0\0\0\x61\x17\0\0\0\0\0\0\x7b\xa1\xff\xd0\0\0\0\0\xb7\x30\0\0\0\ +\0\0\x04\xbf\x97\0\0\0\0\0\0\x0f\x93\0\0\0\0\0\0\x79\x17\0\x28\0\0\0\0\x79\x87\ +\0\x30\0\0\0\0\x15\x80\0\x18\0\0\0\0\xb7\x20\0\0\0\0\0\0\x0f\x12\0\0\0\0\0\0\ +\x61\x11\0\x04\0\0\0\0\x79\x38\0\x08\0\0\0\0\x67\x10\0\0\0\0\0\x03\x0f\x31\0\0\ +\0\0\0\0\x79\x68\0\0\0\0\0\0\xbf\x1a\0\0\0\0\0\0\x07\x10\0\0\xff\xff\xff\xf8\ +\xb7\x20\0\0\0\0\0\x08\x85\0\0\0\0\0\0\x71\xb7\x10\0\0\0\0\0\0\x79\x3a\xff\xf8\ +\0\0\0\0\x0f\x31\0\0\0\0\0\0\xbf\x1a\0\0\0\0\0\0\x07\x10\0\0\xff\xff\xff\xf4\ +\xb7\x20\0\0\0\0\0\x04\x85\0\0\0\0\0\0\x71\xb7\x30\0\0\0\0\0\x04\x61\x1a\xff\ +\xf4\0\0\0\0\x61\x28\0\x10\0\0\0\0\x3d\x12\0\x02\0\0\0\0\x0f\x61\0\0\0\0\0\0\ +\xbf\x96\0\0\0\0\0\0\x7b\xa9\xff\xd8\0\0\0\0\x79\x17\0\x18\0\0\0\0\x7b\xa1\xff\ +\xe0\0\0\0\0\x79\x17\0\x20\0\0\0\0\x79\x11\0\0\0\0\0\0\x0f\x13\0\0\0\0\0\0\x7b\ +\xa1\xff\xe8\0\0\0\0\xbf\x4a\0\0\0\0\0\0\x07\x40\0\0\xff\xff\xff\xd0\x79\x1a\ +\xff\xc8\0\0\0\0\x18\x26\0\0\0\0\0\0\0\0\0\0\0\0\0\x51\xb7\x30\0\0\0\0\0\x11\ +\xb7\x50\0\0\0\0\0\x20\x85\0\0\0\0\0\0\x7e\xb7\0\0\0\0\0\0\0\x95\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\x17\0\0\0\0\0\0\0\x42\0\0\0\x9a\0\x01\x80\x1e\0\0\0\x01\0\0\0\ +\x42\0\0\0\x9a\0\x01\x80\x24\0\0\0\x02\0\0\0\x42\0\0\x02\x7f\0\x01\x88\x1f\0\0\ +\0\x03\0\0\0\x42\0\0\x02\xa3\0\x01\x94\x06\0\0\0\x04\0\0\0\x42\0\0\x02\xbc\0\ +\x01\xa0\x0e\0\0\0\x05\0\0\0\x42\0\0\x01\x3d\0\x01\x84\x1d\0\0\0\x06\0\0\0\x42\ +\0\0\x01\x62\0\x01\xa4\x06\0\0\0\x08\0\0\0\x42\0\0\x02\xce\0\x01\xa8\x03\0\0\0\ +\x10\0\0\0\x42\0\0\x03\x3e\0\x01\xb0\x02\0\0\0\x17\0\0\0\x42\0\0\x03\x79\0\x01\ +\x04\x06\0\0\0\x1a\0\0\0\x42\0\0\x03\x3e\0\x01\xb0\x02\0\0\0\x1b\0\0\0\x42\0\0\ +\x03\xca\0\x01\x10\x0f\0\0\0\x1c\0\0\0\x42\0\0\x03\xdf\0\x01\x14\x2d\0\0\0\x1e\ +\0\0\0\x42\0\0\x04\x16\0\x01\x0c\x0d\0\0\0\x20\0\0\0\x42\0\0\x03\x3e\0\x01\xb0\ +\x02\0\0\0\x21\0\0\0\x42\0\0\x03\xdf\0\x01\x14\x02\0\0\0\x24\0\0\0\x42\0\0\x04\ +\x3d\0\x01\x18\x0d\0\0\0\x27\0\0\0\x42\0\0\x03\x3e\0\x01\xb0\x02\0\0\0\x28\0\0\ +\0\x42\0\0\x04\x3d\0\x01\x18\x0d\0\0\0\x2b\0\0\0\x42\0\0\x04\x3d\0\x01\x18\x0d\ +\0\0\0\x2c\0\0\0\x42\0\0\x04\x6b\0\x01\x1c\x1b\0\0\0\x2d\0\0\0\x42\0\0\x04\x6b\ +\0\x01\x1c\x06\0\0\0\x2e\0\0\0\x42\0\0\x04\x8e\0\x01\x24\x0d\0\0\0\x30\0\0\0\ +\x42\0\0\x03\x3e\0\x01\xb0\x02\0\0\0\x3f\0\0\0\x42\0\0\x02\x49\0\x01\xc0\x01\0\ +\0\0\0\0\0\0\x14\0\0\0\x3e\0\0\0\0\0\0\0\x08\0\0\0\x08\0\0\0\x3e\0\0\0\0\0\0\0\ +\x10\0\0\0\x14\0\0\x01\x09\0\0\0\0\0\0\0\x20\0\0\0\x18\0\0\0\x3e\0\0\0\0\0\0\0\ +\x28\0\0\0\x08\0\0\x01\x39\0\0\0\0\0\0\0\x80\0\0\0\x1a\0\0\0\x3e\0\0\0\0\0\0\0\ +\x90\0\0\0\x1a\0\0\x01\x09\0\0\0\0\0\0\0\xa8\0\0\0\x1a\0\0\x03\x71\0\0\0\0\0\0\ +\0\xb0\0\0\0\x1a\0\0\x03\x75\0\0\0\0\0\0\0\xc0\0\0\0\x1f\0\0\x03\xa3\0\0\0\0\0\ +\0\0\xd8\0\0\0\x20\0\0\x01\x09\0\0\0\0\0\0\0\xf0\0\0\0\x20\0\0\0\x3e\0\0\0\0\0\ +\0\x01\x18\0\0\0\x24\0\0\0\x3e\0\0\0\0\0\0\x01\x50\0\0\0\x1a\0\0\x01\x09\0\0\0\ 
+\0\0\0\x01\x60\0\0\0\x20\0\0\x04\x65\0\0\0\0\0\0\x01\x88\0\0\0\x1a\0\0\x01\x39\ +\0\0\0\0\0\0\x01\x98\0\0\0\x1a\0\0\x04\xa6\0\0\0\0\0\0\x01\xa0\0\0\0\x18\0\0\0\ +\x3e\0\0\0\0\0\0\0\x1a\0\0\0\x41\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x70\x72\ +\x6f\x67\0\0\0\0\0\0\0\0\0\0\x1c\0\0\0\0\0\0\0\x08\0\0\0\0\0\0\0\0\0\0\0\x01\0\ +\0\0\x10\0\0\0\0\0\0\0\0\0\0\0\x19\0\0\0\x01\0\0\0\0\0\0\0\x12\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\x10\0\0\0\0\x62\x70\x66\x5f\x69\x74\x65\x72\x5f\x62\x70\ +\x66\x5f\x70\x72\x6f\x67\0\0\0\0\0\0\0"; + opts.insns_sz = 2216; + opts.insns = (void *)"\ +\xbf\x61\0\0\0\0\0\0\xbf\x1a\0\0\0\0\0\0\x07\x10\0\0\xff\xff\xff\x78\xb7\x20\0\ +\0\0\0\0\x88\xb7\x30\0\0\0\0\0\0\x85\0\0\0\0\0\0\x71\x05\0\0\x14\0\0\0\0\x61\ +\x1a\xff\x78\0\0\0\0\xd5\x10\0\x01\0\0\0\0\x85\0\0\0\0\0\0\xa8\x61\x1a\xff\x7c\ +\0\0\0\0\xd5\x10\0\x01\0\0\0\0\x85\0\0\0\0\0\0\xa8\x61\x1a\xff\x80\0\0\0\0\xd5\ +\x10\0\x01\0\0\0\0\x85\0\0\0\0\0\0\xa8\x61\x1a\xff\x84\0\0\0\0\xd5\x10\0\x01\0\ +\0\0\0\x85\0\0\0\0\0\0\xa8\x18\x06\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x61\x10\0\0\0\0\ +\0\0\xd5\x10\0\x02\0\0\0\0\xbf\x91\0\0\0\0\0\0\x85\0\0\0\0\0\0\xa8\xbf\x07\0\0\ +\0\0\0\0\x95\0\0\0\0\0\0\0\x61\x06\0\x08\0\0\0\0\x18\x16\0\0\0\0\0\0\0\0\0\0\0\ +\0\x0e\x68\x63\x10\0\0\0\0\0\0\x61\x06\0\x0c\0\0\0\0\x18\x16\0\0\0\0\0\0\0\0\0\ +\0\0\0\x0e\x64\x63\x10\0\0\0\0\0\0\x79\x06\0\x10\0\0\0\0\x18\x16\0\0\0\0\0\0\0\ +\0\0\0\0\0\x0e\x58\x7b\x10\0\0\0\0\0\0\x18\x06\0\0\0\0\0\0\0\0\0\0\0\0\x05\0\ +\x18\x16\0\0\0\0\0\0\0\0\0\0\0\0\x0e\x50\x7b\x10\0\0\0\0\0\0\xb7\x10\0\0\0\0\0\ +\x12\x18\x26\0\0\0\0\0\0\0\0\0\0\0\0\x0e\x50\xb7\x30\0\0\0\0\0\x1c\x85\0\0\0\0\ +\0\0\xa6\xbf\x70\0\0\0\0\0\0\xc5\x70\xff\xd4\0\0\0\0\x63\xa7\xff\x78\0\0\0\0\ +\x61\x0a\xff\x78\0\0\0\0\x18\x16\0\0\0\0\0\0\0\0\0\0\0\0\x0e\xa0\x63\x10\0\0\0\ +\0\0\0\x61\x06\0\x1c\0\0\0\0\x15\0\0\x03\0\0\0\0\x18\x16\0\0\0\0\0\0\0\0\0\0\0\ +\0\x0e\x7c\x63\x10\0\0\0\0\0\0\xb7\x10\0\0\0\0\0\0\x18\x26\0\0\0\0\0\0\0\0\0\0\ +\0\0\x0e\x70\xb7\x30\0\0\0\0\0\x48\x85\0\0\0\0\0\0\xa6\xbf\x70\0\0\0\0\0\0\xc5\ +\x70\xff\xc3\0\0\0\0\x18\x16\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x63\x17\0\0\0\0\0\0\ +\x79\x36\0\x20\0\0\0\0\x15\x30\0\x08\0\0\0\0\x18\x16\0\0\0\0\0\0\0\0\0\0\0\0\ +\x0e\xb8\xb7\x20\0\0\0\0\0\x62\x61\x06\0\x04\0\0\0\0\x45\0\0\x02\0\0\0\x01\x85\ +\0\0\0\0\0\0\x94\x05\0\0\x01\0\0\0\0\x85\0\0\0\0\0\0\x71\x18\x26\0\0\0\0\0\0\0\ +\0\0\0\0\0\0\0\x61\x02\0\0\0\0\0\0\x18\x16\0\0\0\0\0\0\0\0\0\0\0\0\x0f\x28\x63\ +\x10\0\0\0\0\0\0\x18\x06\0\0\0\0\0\0\0\0\0\0\0\0\x0f\x20\x18\x16\0\0\0\0\0\0\0\ +\0\0\0\0\0\x0f\x30\x7b\x10\0\0\0\0\0\0\x18\x06\0\0\0\0\0\0\0\0\0\0\0\0\x0e\xb8\ +\x18\x16\0\0\0\0\0\0\0\0\0\0\0\0\x0f\x38\x7b\x10\0\0\0\0\0\0\xb7\x10\0\0\0\0\0\ +\x02\x18\x26\0\0\0\0\0\0\0\0\0\0\0\0\x0f\x28\xb7\x30\0\0\0\0\0\x20\x85\0\0\0\0\ +\0\0\xa6\xbf\x70\0\0\0\0\0\0\xc5\x70\xff\x9f\0\0\0\0\x18\x26\0\0\0\0\0\0\0\0\0\ +\0\0\0\0\0\x61\x02\0\0\0\0\0\0\x18\x16\0\0\0\0\0\0\0\0\0\0\0\0\x0f\x48\x63\x10\ +\0\0\0\0\0\0\xb7\x10\0\0\0\0\0\x16\x18\x26\0\0\0\0\0\0\0\0\0\0\0\0\x0f\x48\xb7\ +\x30\0\0\0\0\0\x04\x85\0\0\0\0\0\0\xa6\xbf\x70\0\0\0\0\0\0\xc5\x70\xff\x92\0\0\ +\0\0\x18\x06\0\0\0\0\0\0\0\0\0\0\0\0\x0f\x50\x18\x16\0\0\0\0\0\0\0\0\0\0\0\0\ +\x11\x70\x7b\x10\0\0\0\0\0\0\x18\x06\0\0\0\0\0\0\0\0\0\0\0\0\x0f\x58\x18\x16\0\ +\0\0\0\0\0\0\0\0\0\0\0\x11\x68\x7b\x10\0\0\0\0\0\0\x18\x06\0\0\0\0\0\0\0\0\0\0\ +\0\0\x10\x58\x18\x16\0\0\0\0\0\0\0\0\0\0\0\0\x11\xb0\x7b\x10\0\0\0\0\0\0\x18\ 
+\x06\0\0\0\0\0\0\0\0\0\0\0\0\x10\x60\x18\x16\0\0\0\0\0\0\0\0\0\0\0\0\x11\xc0\ +\x7b\x10\0\0\0\0\0\0\x18\x06\0\0\0\0\0\0\0\0\0\0\0\0\x10\xf0\x18\x16\0\0\0\0\0\ +\0\0\0\0\0\0\0\x11\xe0\x7b\x10\0\0\0\0\0\0\x18\x06\0\0\0\0\0\0\0\0\0\0\0\0\0\0\ +\x18\x16\0\0\0\0\0\0\0\0\0\0\0\0\x11\xd8\x7b\x10\0\0\0\0\0\0\x61\x06\0\x08\0\0\ +\0\0\x18\x16\0\0\0\0\0\0\0\0\0\0\0\0\x11\x78\x63\x10\0\0\0\0\0\0\x61\x06\0\x0c\ +\0\0\0\0\x18\x16\0\0\0\0\0\0\0\0\0\0\0\0\x11\x7c\x63\x10\0\0\0\0\0\0\x79\x06\0\ +\x10\0\0\0\0\x18\x16\0\0\0\0\0\0\0\0\0\0\0\0\x11\x80\x7b\x10\0\0\0\0\0\0\x61\ +\x0a\xff\x78\0\0\0\0\x18\x16\0\0\0\0\0\0\0\0\0\0\0\0\x11\xa8\x63\x10\0\0\0\0\0\ +\0\x18\x16\0\0\0\0\0\0\0\0\0\0\0\0\x11\xf0\xb7\x20\0\0\0\0\0\x11\xb7\x30\0\0\0\ +\0\0\x0c\xb7\x40\0\0\0\0\0\0\x85\0\0\0\0\0\0\xa7\xbf\x70\0\0\0\0\0\0\xc5\x70\ +\xff\x5c\0\0\0\0\x18\x06\0\0\0\0\0\0\0\0\0\0\0\0\x11\x60\x63\x07\0\x6c\0\0\0\0\ +\x77\x70\0\0\0\0\0\x20\x63\x07\0\x70\0\0\0\0\xb7\x10\0\0\0\0\0\x05\x18\x26\0\0\ +\0\0\0\0\0\0\0\0\0\0\x11\x60\xb7\x30\0\0\0\0\0\x8c\x85\0\0\0\0\0\0\xa6\xbf\x70\ +\0\0\0\0\0\0\x18\x06\0\0\0\0\0\0\0\0\0\0\0\0\x11\xd0\x61\x10\0\0\0\0\0\0\xd5\ +\x10\0\x02\0\0\0\0\xbf\x91\0\0\0\0\0\0\x85\0\0\0\0\0\0\xa8\xc5\x70\xff\x4a\0\0\ +\0\0\x63\xa7\xff\x80\0\0\0\0\x18\x06\0\0\0\0\0\0\0\0\0\0\0\0\x12\x08\x18\x16\0\ +\0\0\0\0\0\0\0\0\0\0\0\x16\xe0\x7b\x10\0\0\0\0\0\0\x18\x06\0\0\0\0\0\0\0\0\0\0\ +\0\0\x12\x10\x18\x16\0\0\0\0\0\0\0\0\0\0\0\0\x16\xd8\x7b\x10\0\0\0\0\0\0\x18\ +\x06\0\0\0\0\0\0\0\0\0\0\0\0\x14\x18\x18\x16\0\0\0\0\0\0\0\0\0\0\0\0\x17\x20\ +\x7b\x10\0\0\0\0\0\0\x18\x06\0\0\0\0\0\0\0\0\0\0\0\0\x14\x20\x18\x16\0\0\0\0\0\ +\0\0\0\0\0\0\0\x17\x30\x7b\x10\0\0\0\0\0\0\x18\x06\0\0\0\0\0\0\0\0\0\0\0\0\x15\ +\xb0\x18\x16\0\0\0\0\0\0\0\0\0\0\0\0\x17\x50\x7b\x10\0\0\0\0\0\0\x18\x06\0\0\0\ +\0\0\0\0\0\0\0\0\0\0\0\x18\x16\0\0\0\0\0\0\0\0\0\0\0\0\x17\x48\x7b\x10\0\0\0\0\ +\0\0\x61\x06\0\x08\0\0\0\0\x18\x16\0\0\0\0\0\0\0\0\0\0\0\0\x16\xe8\x63\x10\0\0\ +\0\0\0\0\x61\x06\0\x0c\0\0\0\0\x18\x16\0\0\0\0\0\0\0\0\0\0\0\0\x16\xec\x63\x10\ +\0\0\0\0\0\0\x79\x06\0\x10\0\0\0\0\x18\x16\0\0\0\0\0\0\0\0\0\0\0\0\x16\xf0\x7b\ +\x10\0\0\0\0\0\0\x61\x0a\xff\x78\0\0\0\0\x18\x16\0\0\0\0\0\0\0\0\0\0\0\0\x17\ +\x18\x63\x10\0\0\0\0\0\0\x18\x16\0\0\0\0\0\0\0\0\0\0\0\0\x17\x60\xb7\x20\0\0\0\ +\0\0\x12\xb7\x30\0\0\0\0\0\x0c\xb7\x40\0\0\0\0\0\0\x85\0\0\0\0\0\0\xa7\xbf\x70\ +\0\0\0\0\0\0\xc5\x70\xff\x13\0\0\0\0\x18\x06\0\0\0\0\0\0\0\0\0\0\0\0\x16\xd0\ +\x63\x07\0\x6c\0\0\0\0\x77\x70\0\0\0\0\0\x20\x63\x07\0\x70\0\0\0\0\xb7\x10\0\0\ +\0\0\0\x05\x18\x26\0\0\0\0\0\0\0\0\0\0\0\0\x16\xd0\xb7\x30\0\0\0\0\0\x8c\x85\0\ +\0\0\0\0\0\xa6\xbf\x70\0\0\0\0\0\0\x18\x06\0\0\0\0\0\0\0\0\0\0\0\0\x17\x40\x61\ +\x10\0\0\0\0\0\0\xd5\x10\0\x02\0\0\0\0\xbf\x91\0\0\0\0\0\0\x85\0\0\0\0\0\0\xa8\ +\xc5\x70\xff\x01\0\0\0\0\x63\xa7\xff\x84\0\0\0\0\x61\x1a\xff\x78\0\0\0\0\xd5\ +\x10\0\x02\0\0\0\0\xbf\x91\0\0\0\0\0\0\x85\0\0\0\0\0\0\xa8\x61\x0a\xff\x80\0\0\ +\0\0\x63\x60\0\x28\0\0\0\0\x61\x0a\xff\x84\0\0\0\0\x63\x60\0\x2c\0\0\0\0\x18\ +\x16\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x61\x01\0\0\0\0\0\0\x63\x60\0\x18\0\0\0\0\xb7\ +\0\0\0\0\0\0\0\x95\0\0\0\0\0\0\0"; + err = bpf_load_and_run(&opts); + if (err < 0) + return err; + return 0; +} + +static inline struct iterators_bpf * +iterators_bpf__open_and_load(void) +{ + struct iterators_bpf *skel; + + skel = iterators_bpf__open(); + if (!skel) + return NULL; + if (iterators_bpf__load(skel)) { + iterators_bpf__destroy(skel); + return NULL; + } + return skel; +} + +__attribute__((unused)) static void +iterators_bpf__assert(struct iterators_bpf *s 
__attribute__((unused))) +{ +#ifdef __cplusplus +#define _Static_assert static_assert +#endif +#ifdef __cplusplus +#undef _Static_assert +#endif +} + +#endif /* __ITERATORS_BPF_SKEL_H__ */ diff --git a/kernel/bpf/preload/iterators/iterators.lskel.h b/kernel/bpf/preload/iterators/iterators.lskel-little-endian.h index 70f236a82fe1..70f236a82fe1 100644 --- a/kernel/bpf/preload/iterators/iterators.lskel.h +++ b/kernel/bpf/preload/iterators/iterators.lskel-little-endian.h diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c index 99417b387547..bcc97613de76 100644 --- a/kernel/bpf/syscall.c +++ b/kernel/bpf/syscall.c @@ -390,7 +390,7 @@ static int bpf_map_alloc_id(struct bpf_map *map) return id > 0 ? 0 : id; } -void bpf_map_free_id(struct bpf_map *map, bool do_idr_lock) +void bpf_map_free_id(struct bpf_map *map) { unsigned long flags; @@ -402,18 +402,12 @@ void bpf_map_free_id(struct bpf_map *map, bool do_idr_lock) if (!map->id) return; - if (do_idr_lock) - spin_lock_irqsave(&map_idr_lock, flags); - else - __acquire(&map_idr_lock); + spin_lock_irqsave(&map_idr_lock, flags); idr_remove(&map_idr, map->id); map->id = 0; - if (do_idr_lock) - spin_unlock_irqrestore(&map_idr_lock, flags); - else - __release(&map_idr_lock); + spin_unlock_irqrestore(&map_idr_lock, flags); } #ifdef CONFIG_MEMCG_KMEM @@ -706,13 +700,13 @@ static void bpf_map_put_uref(struct bpf_map *map) } /* decrement map refcnt and schedule it for freeing via workqueue - * (unrelying map implementation ops->map_free() might sleep) + * (underlying map implementation ops->map_free() might sleep) */ -static void __bpf_map_put(struct bpf_map *map, bool do_idr_lock) +void bpf_map_put(struct bpf_map *map) { if (atomic64_dec_and_test(&map->refcnt)) { /* bpf_map_free_id() must be called first */ - bpf_map_free_id(map, do_idr_lock); + bpf_map_free_id(map); btf_put(map->btf); INIT_WORK(&map->work, bpf_map_free_deferred); /* Avoid spawning kworkers, since they all might contend @@ -721,11 +715,6 @@ static void __bpf_map_put(struct bpf_map *map, bool do_idr_lock) queue_work(system_unbound_wq, &map->work); } } - -void bpf_map_put(struct bpf_map *map) -{ - __bpf_map_put(map, true); -} EXPORT_SYMBOL_GPL(bpf_map_put); void bpf_map_put_with_uref(struct bpf_map *map) diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c index 205dc9edcaa9..ca826bd1eba3 100644 --- a/kernel/cgroup/cpuset.c +++ b/kernel/cgroup/cpuset.c @@ -1205,12 +1205,13 @@ void rebuild_sched_domains(void) /** * update_tasks_cpumask - Update the cpumasks of tasks in the cpuset. * @cs: the cpuset in which each task's cpus_allowed mask needs to be changed + * @new_cpus: the temp variable for the new effective_cpus mask * * Iterate through each task of @cs updating its cpus_allowed to the * effective cpuset's. As this function is called with cpuset_rwsem held, * cpuset membership stays stable. 
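
The light skeleton generated above exposes open/load/attach/destroy entry points that bpf_preload_kern.c drives. A condensed sketch of that call sequence, with error handling trimmed relative to the real caller (which also pins the resulting links via bpf_link_get_from_fd()):

    static struct iterators_bpf *skel;

    static int preload_skel(void)
    {
            int err;

            skel = iterators_bpf__open_and_load();
            if (!skel)
                    return -ENOMEM;
            err = iterators_bpf__attach(skel);
            if (err < 0) {
                    iterators_bpf__destroy(skel);
                    skel = NULL;
                    return err;
            }
            return 0;
    }
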
*/ -static void update_tasks_cpumask(struct cpuset *cs) +static void update_tasks_cpumask(struct cpuset *cs, struct cpumask *new_cpus) { struct css_task_iter it; struct task_struct *task; @@ -1224,7 +1225,10 @@ static void update_tasks_cpumask(struct cpuset *cs) if (top_cs && (task->flags & PF_KTHREAD) && kthread_is_per_cpu(task)) continue; - set_cpus_allowed_ptr(task, cs->effective_cpus); + + cpumask_and(new_cpus, cs->effective_cpus, + task_cpu_possible_mask(task)); + set_cpus_allowed_ptr(task, new_cpus); } css_task_iter_end(&it); } @@ -1509,7 +1513,7 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd, spin_unlock_irq(&callback_lock); if (adding || deleting) - update_tasks_cpumask(parent); + update_tasks_cpumask(parent, tmp->new_cpus); /* * Set or clear CS_SCHED_LOAD_BALANCE when partcmd_update, if necessary. @@ -1661,7 +1665,7 @@ update_parent_subparts: WARN_ON(!is_in_v2_mode() && !cpumask_equal(cp->cpus_allowed, cp->effective_cpus)); - update_tasks_cpumask(cp); + update_tasks_cpumask(cp, tmp->new_cpus); /* * On legacy hierarchy, if the effective cpumask of any non- @@ -2309,7 +2313,7 @@ static int update_prstate(struct cpuset *cs, int new_prs) } } - update_tasks_cpumask(parent); + update_tasks_cpumask(parent, tmpmask.new_cpus); if (parent->child_ecpus_count) update_sibling_cpumasks(parent, cs, &tmpmask); @@ -3348,7 +3352,7 @@ hotplug_update_tasks_legacy(struct cpuset *cs, * as the tasks will be migrated to an ancestor. */ if (cpus_updated && !cpumask_empty(cs->cpus_allowed)) - update_tasks_cpumask(cs); + update_tasks_cpumask(cs, new_cpus); if (mems_updated && !nodes_empty(cs->mems_allowed)) update_tasks_nodemask(cs); @@ -3385,7 +3389,7 @@ hotplug_update_tasks(struct cpuset *cs, spin_unlock_irq(&callback_lock); if (cpus_updated) - update_tasks_cpumask(cs); + update_tasks_cpumask(cs, new_cpus); if (mems_updated) update_tasks_nodemask(cs); } @@ -3692,15 +3696,38 @@ void __init cpuset_init_smp(void) * Description: Returns the cpumask_var_t cpus_allowed of the cpuset * attached to the specified @tsk. Guaranteed to return some non-empty * subset of cpu_online_mask, even if this means going outside the - * tasks cpuset. + * tasks cpuset, except when the task is in the top cpuset. **/ void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask) { unsigned long flags; + struct cpuset *cs; spin_lock_irqsave(&callback_lock, flags); - guarantee_online_cpus(tsk, pmask); + rcu_read_lock(); + + cs = task_cs(tsk); + if (cs != &top_cpuset) + guarantee_online_cpus(tsk, pmask); + /* + * Tasks in the top cpuset won't get update to their cpumasks + * when a hotplug online/offline event happens. So we include all + * offline cpus in the allowed cpu list. + */ + if ((cs == &top_cpuset) || cpumask_empty(pmask)) { + const struct cpumask *possible_mask = task_cpu_possible_mask(tsk); + + /* + * We first exclude cpus allocated to partitions. If there is no + * allowable online cpu left, we fall back to all possible cpus. + */ + cpumask_andnot(pmask, possible_mask, top_cpuset.subparts_cpus); + if (!cpumask_intersects(pmask, cpu_online_mask)) + cpumask_copy(pmask, possible_mask); + } + + rcu_read_unlock(); spin_unlock_irqrestore(&callback_lock, flags); } diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c index 793ecff29038..831f1f472bb8 100644 --- a/kernel/cgroup/rstat.c +++ b/kernel/cgroup/rstat.c @@ -26,7 +26,7 @@ static struct cgroup_rstat_cpu *cgroup_rstat_cpu(struct cgroup *cgrp, int cpu) * rstat_cpu->updated_children list. 
See the comment on top of * cgroup_rstat_cpu definition for details. */ -void cgroup_rstat_updated(struct cgroup *cgrp, int cpu) +__bpf_kfunc void cgroup_rstat_updated(struct cgroup *cgrp, int cpu) { raw_spinlock_t *cpu_lock = per_cpu_ptr(&cgroup_rstat_cpu_lock, cpu); unsigned long flags; @@ -231,7 +231,7 @@ static void cgroup_rstat_flush_locked(struct cgroup *cgrp, bool may_sleep) * * This function may block. */ -void cgroup_rstat_flush(struct cgroup *cgrp) +__bpf_kfunc void cgroup_rstat_flush(struct cgroup *cgrp) { might_sleep(); diff --git a/kernel/events/core.c b/kernel/events/core.c index d56328e5080e..c4be13e50547 100644 --- a/kernel/events/core.c +++ b/kernel/events/core.c @@ -4813,19 +4813,17 @@ find_get_pmu_context(struct pmu *pmu, struct perf_event_context *ctx, cpc = per_cpu_ptr(pmu->cpu_pmu_context, event->cpu); epc = &cpc->epc; - + raw_spin_lock_irq(&ctx->lock); if (!epc->ctx) { atomic_set(&epc->refcount, 1); epc->embedded = 1; - raw_spin_lock_irq(&ctx->lock); list_add(&epc->pmu_ctx_entry, &ctx->pmu_ctx_list); epc->ctx = ctx; - raw_spin_unlock_irq(&ctx->lock); } else { WARN_ON_ONCE(epc->ctx != ctx); atomic_inc(&epc->refcount); } - + raw_spin_unlock_irq(&ctx->lock); return epc; } @@ -4896,33 +4894,30 @@ static void free_epc_rcu(struct rcu_head *head) static void put_pmu_ctx(struct perf_event_pmu_context *epc) { + struct perf_event_context *ctx = epc->ctx; unsigned long flags; - if (!atomic_dec_and_test(&epc->refcount)) + /* + * XXX + * + * lockdep_assert_held(&ctx->mutex); + * + * can't because of the call-site in _free_event()/put_event() + * which isn't always called under ctx->mutex. + */ + if (!atomic_dec_and_raw_lock_irqsave(&epc->refcount, &ctx->lock, flags)) return; - if (epc->ctx) { - struct perf_event_context *ctx = epc->ctx; + WARN_ON_ONCE(list_empty(&epc->pmu_ctx_entry)); - /* - * XXX - * - * lockdep_assert_held(&ctx->mutex); - * - * can't because of the call-site in _free_event()/put_event() - * which isn't always called under ctx->mutex. 
- */ - - WARN_ON_ONCE(list_empty(&epc->pmu_ctx_entry)); - raw_spin_lock_irqsave(&ctx->lock, flags); - list_del_init(&epc->pmu_ctx_entry); - epc->ctx = NULL; - raw_spin_unlock_irqrestore(&ctx->lock, flags); - } + list_del_init(&epc->pmu_ctx_entry); + epc->ctx = NULL; WARN_ON_ONCE(!list_empty(&epc->pinned_active)); WARN_ON_ONCE(!list_empty(&epc->flexible_active)); + raw_spin_unlock_irqrestore(&ctx->lock, flags); + if (epc->embedded) return; diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c index 5c3fb6168eef..798a9042421f 100644 --- a/kernel/irq/irqdomain.c +++ b/kernel/irq/irqdomain.c @@ -1915,7 +1915,7 @@ static void debugfs_add_domain_dir(struct irq_domain *d) static void debugfs_remove_domain_dir(struct irq_domain *d) { - debugfs_remove(debugfs_lookup(d->name, domain_dir)); + debugfs_lookup_and_remove(d->name, domain_dir); } void __init irq_domain_debugfs_init(struct dentry *root) diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c index 969e8f52f7da..b1cf259854ca 100644 --- a/kernel/kexec_core.c +++ b/kernel/kexec_core.c @@ -6,6 +6,7 @@ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt +#include <linux/btf.h> #include <linux/capability.h> #include <linux/mm.h> #include <linux/file.h> @@ -975,7 +976,7 @@ void __noclone __crash_kexec(struct pt_regs *regs) } STACK_FRAME_NON_STANDARD(__crash_kexec); -void crash_kexec(struct pt_regs *regs) +__bpf_kfunc void crash_kexec(struct pt_regs *regs) { int old_cpu, this_cpu; diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c index d1313435af2b..c58baf9983cc 100644 --- a/kernel/trace/bpf_trace.c +++ b/kernel/trace/bpf_trace.c @@ -1237,7 +1237,7 @@ __diag_ignore_all("-Wmissing-prototypes", * Return: a bpf_key pointer with a valid key pointer if the key is found, a * NULL pointer otherwise. */ -struct bpf_key *bpf_lookup_user_key(u32 serial, u64 flags) +__bpf_kfunc struct bpf_key *bpf_lookup_user_key(u32 serial, u64 flags) { key_ref_t key_ref; struct bpf_key *bkey; @@ -1286,7 +1286,7 @@ struct bpf_key *bpf_lookup_user_key(u32 serial, u64 flags) * Return: a bpf_key pointer with an invalid key pointer set from the * pre-determined ID on success, a NULL pointer otherwise */ -struct bpf_key *bpf_lookup_system_key(u64 id) +__bpf_kfunc struct bpf_key *bpf_lookup_system_key(u64 id) { struct bpf_key *bkey; @@ -1310,7 +1310,7 @@ struct bpf_key *bpf_lookup_system_key(u64 id) * Decrement the reference count of the key inside *bkey*, if the pointer * is valid, and free *bkey*. */ -void bpf_key_put(struct bpf_key *bkey) +__bpf_kfunc void bpf_key_put(struct bpf_key *bkey) { if (bkey->has_ref) key_put(bkey->key); @@ -1330,7 +1330,7 @@ void bpf_key_put(struct bpf_key *bkey) * * Return: 0 on success, a negative value on error. 
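
The put_pmu_ctx() rework above depends on the _atomic_dec_and_raw_lock_irqsave() helper added to lib/dec_and_lock.c below: the refcount decrement and the lock acquisition become one operation, so an epc can no longer be observed at refcount zero outside ctx->lock. The resulting put-side shape, as a sketch against a hypothetical object type:

    #include <linux/atomic.h>
    #include <linux/spinlock.h>

    struct obj {
            atomic_t refcount;
            raw_spinlock_t lock;
    };

    static void obj_put(struct obj *o)
    {
            unsigned long flags;

            /* Fast path: count stayed above zero, lock never taken. */
            if (!_atomic_dec_and_raw_lock_irqsave(&o->refcount, &o->lock,
                                                  &flags))
                    return;

            /* Count hit zero: we hold o->lock with interrupts disabled. */
            /* ... unlink the object from lock-protected structures ... */
            raw_spin_unlock_irqrestore(&o->lock, flags);
    }
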
*/ -int bpf_verify_pkcs7_signature(struct bpf_dynptr_kern *data_ptr, +__bpf_kfunc int bpf_verify_pkcs7_signature(struct bpf_dynptr_kern *data_ptr, struct bpf_dynptr_kern *sig_ptr, struct bpf_key *trusted_keyring) { diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c index 78ed5f1baa8c..c9e40f692650 100644 --- a/kernel/trace/trace.c +++ b/kernel/trace/trace.c @@ -9148,9 +9148,6 @@ buffer_percent_write(struct file *filp, const char __user *ubuf, if (val > 100) return -EINVAL; - if (!val) - val = 1; - tr->buffer_percent = val; (*ppos)++; diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index 36c34c9909fa..3509115a0e85 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -763,6 +763,7 @@ config DEBUG_KMEMLEAK select KALLSYMS select CRC32 select STACKDEPOT + select STACKDEPOT_ALWAYS_INIT if !DEBUG_KMEMLEAK_DEFAULT_OFF help Say Y here if you want to enable the memory leak detector. The memory allocation/freeing is traced in a way @@ -1216,7 +1217,7 @@ config SCHED_DEBUG depends on DEBUG_KERNEL && PROC_FS default y help - If you say Y here, the /proc/sched_debug file will be provided + If you say Y here, the /sys/kernel/debug/sched file will be provided that can help debug the scheduler. The runtime overhead of this option is minimal. diff --git a/lib/dec_and_lock.c b/lib/dec_and_lock.c index 9555b68bb774..1dcca8f2e194 100644 --- a/lib/dec_and_lock.c +++ b/lib/dec_and_lock.c @@ -49,3 +49,34 @@ int _atomic_dec_and_lock_irqsave(atomic_t *atomic, spinlock_t *lock, return 0; } EXPORT_SYMBOL(_atomic_dec_and_lock_irqsave); + +int _atomic_dec_and_raw_lock(atomic_t *atomic, raw_spinlock_t *lock) +{ + /* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */ + if (atomic_add_unless(atomic, -1, 1)) + return 0; + + /* Otherwise do it the slow way */ + raw_spin_lock(lock); + if (atomic_dec_and_test(atomic)) + return 1; + raw_spin_unlock(lock); + return 0; +} +EXPORT_SYMBOL(_atomic_dec_and_raw_lock); + +int _atomic_dec_and_raw_lock_irqsave(atomic_t *atomic, raw_spinlock_t *lock, + unsigned long *flags) +{ + /* Subtract 1 from counter unless that drops it to 0 (ie. 
it was 1) */ + if (atomic_add_unless(atomic, -1, 1)) + return 0; + + /* Otherwise do it the slow way */ + raw_spin_lock_irqsave(lock, *flags); + if (atomic_dec_and_test(atomic)) + return 1; + raw_spin_unlock_irqrestore(lock, *flags); + return 0; +} +EXPORT_SYMBOL(_atomic_dec_and_raw_lock_irqsave); diff --git a/lib/maple_tree.c b/lib/maple_tree.c index 26e2045d3cda..5a976393c9ae 100644 --- a/lib/maple_tree.c +++ b/lib/maple_tree.c @@ -670,12 +670,13 @@ static inline unsigned long mte_pivot(const struct maple_enode *mn, unsigned char piv) { struct maple_node *node = mte_to_node(mn); + enum maple_type type = mte_node_type(mn); - if (piv >= mt_pivots[piv]) { + if (piv >= mt_pivots[type]) { WARN_ON(1); return 0; } - switch (mte_node_type(mn)) { + switch (type) { case maple_arange_64: return node->ma64.pivot[piv]; case maple_range_64: @@ -4887,7 +4888,7 @@ static bool mas_rev_awalk(struct ma_state *mas, unsigned long size) unsigned long *pivots, *gaps; void __rcu **slots; unsigned long gap = 0; - unsigned long max, min, index; + unsigned long max, min; unsigned char offset; if (unlikely(mas_is_err(mas))) @@ -4909,8 +4910,7 @@ static bool mas_rev_awalk(struct ma_state *mas, unsigned long size) min = mas_safe_min(mas, pivots, --offset); max = mas_safe_pivot(mas, pivots, offset, type); - index = mas->index; - while (index <= max) { + while (mas->index <= max) { gap = 0; if (gaps) gap = gaps[offset]; @@ -4941,10 +4941,8 @@ static bool mas_rev_awalk(struct ma_state *mas, unsigned long size) min = mas_safe_min(mas, pivots, offset); } - if (unlikely(index > max)) { - mas_set_err(mas, -EBUSY); - return false; - } + if (unlikely((mas->index > max) || (size - 1 > max - mas->index))) + goto no_space; if (unlikely(ma_is_leaf(type))) { mas->offset = offset; @@ -4961,9 +4959,11 @@ static bool mas_rev_awalk(struct ma_state *mas, unsigned long size) return false; ascend: - if (mte_is_root(mas->node)) - mas_set_err(mas, -EBUSY); + if (!mte_is_root(mas->node)) + return false; +no_space: + mas_set_err(mas, -EBUSY); return false; } diff --git a/lib/test_maple_tree.c b/lib/test_maple_tree.c index 497fc93ccf9e..ec847bf4dcb4 100644 --- a/lib/test_maple_tree.c +++ b/lib/test_maple_tree.c @@ -2517,6 +2517,91 @@ static noinline void check_bnode_min_spanning(struct maple_tree *mt) mt_set_non_kernel(0); } +static noinline void check_empty_area_window(struct maple_tree *mt) +{ + unsigned long i, nr_entries = 20; + MA_STATE(mas, mt, 0, 0); + + for (i = 1; i <= nr_entries; i++) + mtree_store_range(mt, i*10, i*10 + 9, + xa_mk_value(i), GFP_KERNEL); + + /* Create another hole besides the one at 0 */ + mtree_store_range(mt, 160, 169, NULL, GFP_KERNEL); + + /* Check lower bounds that don't fit */ + rcu_read_lock(); + MT_BUG_ON(mt, mas_empty_area_rev(&mas, 5, 90, 10) != -EBUSY); + + mas_reset(&mas); + MT_BUG_ON(mt, mas_empty_area_rev(&mas, 6, 90, 5) != -EBUSY); + + /* Check lower bound that does fit */ + mas_reset(&mas); + MT_BUG_ON(mt, mas_empty_area_rev(&mas, 5, 90, 5) != 0); + MT_BUG_ON(mt, mas.index != 5); + MT_BUG_ON(mt, mas.last != 9); + rcu_read_unlock(); + + /* Check one gap that doesn't fit and one that does */ + rcu_read_lock(); + mas_reset(&mas); + MT_BUG_ON(mt, mas_empty_area_rev(&mas, 5, 217, 9) != 0); + MT_BUG_ON(mt, mas.index != 161); + MT_BUG_ON(mt, mas.last != 169); + + /* Check one gap that does fit above the min */ + mas_reset(&mas); + MT_BUG_ON(mt, mas_empty_area_rev(&mas, 100, 218, 3) != 0); + MT_BUG_ON(mt, mas.index != 216); + MT_BUG_ON(mt, mas.last != 218); + + /* Check size that doesn't fit any gap */ + 
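+ /*
+  * Convention exercised by the cases above and below:
+  * mas_empty_area_rev(&mas, min, max, size) walks backwards for a gap
+  * of at least @size within [@min, @max]; on success mas.index and
+  * mas.last delimit the fit at the highest address in the window, and
+  * -EBUSY means no gap there can satisfy the request.
+  */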
mas_reset(&mas); + MT_BUG_ON(mt, mas_empty_area_rev(&mas, 100, 218, 16) != -EBUSY); + + /* + * Check size that doesn't fit the lower end of the window but + * does fit the gap + */ + mas_reset(&mas); + MT_BUG_ON(mt, mas_empty_area_rev(&mas, 167, 200, 4) != -EBUSY); + + /* + * Check size that doesn't fit the upper end of the window but + * does fit the gap + */ + mas_reset(&mas); + MT_BUG_ON(mt, mas_empty_area_rev(&mas, 100, 162, 4) != -EBUSY); + + /* Check mas_empty_area forward */ + mas_reset(&mas); + MT_BUG_ON(mt, mas_empty_area(&mas, 0, 100, 9) != 0); + MT_BUG_ON(mt, mas.index != 0); + MT_BUG_ON(mt, mas.last != 8); + + mas_reset(&mas); + MT_BUG_ON(mt, mas_empty_area(&mas, 0, 100, 4) != 0); + MT_BUG_ON(mt, mas.index != 0); + MT_BUG_ON(mt, mas.last != 3); + + mas_reset(&mas); + MT_BUG_ON(mt, mas_empty_area(&mas, 0, 100, 11) != -EBUSY); + + mas_reset(&mas); + MT_BUG_ON(mt, mas_empty_area(&mas, 5, 100, 6) != -EBUSY); + + mas_reset(&mas); + MT_BUG_ON(mt, mas_empty_area(&mas, 0, 8, 10) != -EBUSY); + + mas_reset(&mas); + mas_empty_area(&mas, 100, 165, 3); + + mas_reset(&mas); + MT_BUG_ON(mt, mas_empty_area(&mas, 100, 163, 6) != -EBUSY); + rcu_read_unlock(); +} + static DEFINE_MTREE(tree); static int maple_tree_seed(void) { @@ -2765,6 +2850,10 @@ static int maple_tree_seed(void) check_bnode_min_spanning(&tree); mtree_destroy(&tree); + mt_init_flags(&tree, MT_FLAGS_ALLOC_RANGE); + check_empty_area_window(&tree); + mtree_destroy(&tree); + #if defined(BENCH) skip: #endif diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 7fcdb98c9e68..bdbfeb6fb393 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -5051,6 +5051,9 @@ again: entry = huge_pte_clear_uffd_wp(entry); set_huge_pte_at(dst, addr, dst_pte, entry); } else if (unlikely(is_pte_marker(entry))) { + /* No swap on hugetlb */ + WARN_ON_ONCE( + is_swapin_error_entry(pte_to_swp_entry(entry))); /* * We copy the pte marker only if the dst vma has * uffd-wp enabled. diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 79be13133322..90acfea40c13 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -847,6 +847,10 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address, return SCAN_SUCCEED; } +/* + * See pmd_trans_unstable() for how the result may change out from + * underneath us, even if we hold mmap_lock in read. + */ static int find_pmd_or_thp_or_none(struct mm_struct *mm, unsigned long address, pmd_t **pmd) @@ -865,8 +869,12 @@ static int find_pmd_or_thp_or_none(struct mm_struct *mm, #endif if (pmd_none(pmde)) return SCAN_PMD_NONE; + if (!pmd_present(pmde)) + return SCAN_PMD_NULL; if (pmd_trans_huge(pmde)) return SCAN_PMD_MAPPED; + if (pmd_devmap(pmde)) + return SCAN_PMD_NULL; if (pmd_bad(pmde)) return SCAN_PMD_NULL; return SCAN_SUCCEED; @@ -1642,7 +1650,7 @@ static int retract_page_tables(struct address_space *mapping, pgoff_t pgoff, * has higher cost too. It would also probably require locking * the anon_vma. */ - if (vma->anon_vma) { + if (READ_ONCE(vma->anon_vma)) { result = SCAN_PAGE_ANON; goto next; } @@ -1671,6 +1679,18 @@ static int retract_page_tables(struct address_space *mapping, pgoff_t pgoff, if ((cc->is_khugepaged || is_target) && mmap_write_trylock(mm)) { /* + * Re-check whether we have an ->anon_vma, because + * collapse_and_free_pmd() requires that either no + * ->anon_vma exists or the anon_vma is locked. + * We already checked ->anon_vma above, but that check + * is racy because ->anon_vma can be populated under the + * mmap lock in read mode. 
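+ * For instance, a concurrent page fault holding mmap_lock for read
+ * can install ->anon_vma between the READ_ONCE() check earlier in
+ * this function and the mmap_write_trylock() here.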
+ */ + if (vma->anon_vma) { + result = SCAN_PAGE_ANON; + goto unlock_next; + } + /* * When a vma is registered with uffd-wp, we can't * recycle the pmd pgtable because there can be pte * markers installed. Skip it only, so the rest mm/vma diff --git a/mm/kmemleak.c b/mm/kmemleak.c index 92f670edbf51..55dc8b8b0616 100644 --- a/mm/kmemleak.c +++ b/mm/kmemleak.c @@ -2070,8 +2070,10 @@ static int __init kmemleak_boot_config(char *str) return -EINVAL; if (strcmp(str, "off") == 0) kmemleak_disable(); - else if (strcmp(str, "on") == 0) + else if (strcmp(str, "on") == 0) { kmemleak_skip_disable = 1; + stack_depot_want_early_init(); + } else return -EINVAL; return 0; @@ -2093,7 +2095,6 @@ void __init kmemleak_init(void) if (kmemleak_error) return; - stack_depot_init(); jiffies_min_age = msecs_to_jiffies(MSECS_MIN_AGE); jiffies_scan_wait = msecs_to_jiffies(SECS_SCAN_WAIT * 1000); diff --git a/mm/memcontrol.c b/mm/memcontrol.c index ab457f0394ab..73afff8062f9 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -63,7 +63,6 @@ #include <linux/resume_user_mode.h> #include <linux/psi.h> #include <linux/seq_buf.h> -#include <linux/parser.h> #include "internal.h" #include <net/sock.h> #include <net/ip.h> @@ -2393,8 +2392,7 @@ static unsigned long reclaim_high(struct mem_cgroup *memcg, psi_memstall_enter(&pflags); nr_reclaimed += try_to_free_mem_cgroup_pages(memcg, nr_pages, gfp_mask, - MEMCG_RECLAIM_MAY_SWAP, - NULL); + MEMCG_RECLAIM_MAY_SWAP); psi_memstall_leave(&pflags); } while ((memcg = parent_mem_cgroup(memcg)) && !mem_cgroup_is_root(memcg)); @@ -2685,8 +2683,7 @@ retry: psi_memstall_enter(&pflags); nr_reclaimed = try_to_free_mem_cgroup_pages(mem_over_limit, nr_pages, - gfp_mask, reclaim_options, - NULL); + gfp_mask, reclaim_options); psi_memstall_leave(&pflags); if (mem_cgroup_margin(mem_over_limit) >= nr_pages) @@ -3506,8 +3503,7 @@ static int mem_cgroup_resize_max(struct mem_cgroup *memcg, } if (!try_to_free_mem_cgroup_pages(memcg, 1, GFP_KERNEL, - memsw ? 0 : MEMCG_RECLAIM_MAY_SWAP, - NULL)) { + memsw ? 
0 : MEMCG_RECLAIM_MAY_SWAP)) { ret = -EBUSY; break; } @@ -3618,8 +3614,7 @@ static int mem_cgroup_force_empty(struct mem_cgroup *memcg) return -EINTR; if (!try_to_free_mem_cgroup_pages(memcg, 1, GFP_KERNEL, - MEMCG_RECLAIM_MAY_SWAP, - NULL)) + MEMCG_RECLAIM_MAY_SWAP)) nr_retries--; } @@ -6429,8 +6424,7 @@ static ssize_t memory_high_write(struct kernfs_open_file *of, } reclaimed = try_to_free_mem_cgroup_pages(memcg, nr_pages - high, - GFP_KERNEL, MEMCG_RECLAIM_MAY_SWAP, - NULL); + GFP_KERNEL, MEMCG_RECLAIM_MAY_SWAP); if (!reclaimed && !nr_retries--) break; @@ -6479,8 +6473,7 @@ static ssize_t memory_max_write(struct kernfs_open_file *of, if (nr_reclaims) { if (!try_to_free_mem_cgroup_pages(memcg, nr_pages - max, - GFP_KERNEL, MEMCG_RECLAIM_MAY_SWAP, - NULL)) + GFP_KERNEL, MEMCG_RECLAIM_MAY_SWAP)) nr_reclaims--; continue; } @@ -6603,54 +6596,21 @@ static ssize_t memory_oom_group_write(struct kernfs_open_file *of, return nbytes; } -enum { - MEMORY_RECLAIM_NODES = 0, - MEMORY_RECLAIM_NULL, -}; - -static const match_table_t if_tokens = { - { MEMORY_RECLAIM_NODES, "nodes=%s" }, - { MEMORY_RECLAIM_NULL, NULL }, -}; - static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf, size_t nbytes, loff_t off) { struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of)); unsigned int nr_retries = MAX_RECLAIM_RETRIES; unsigned long nr_to_reclaim, nr_reclaimed = 0; - unsigned int reclaim_options = MEMCG_RECLAIM_MAY_SWAP | - MEMCG_RECLAIM_PROACTIVE; - char *old_buf, *start; - substring_t args[MAX_OPT_ARGS]; - int token; - char value[256]; - nodemask_t nodemask = NODE_MASK_ALL; - - buf = strstrip(buf); - - old_buf = buf; - nr_to_reclaim = memparse(buf, &buf) / PAGE_SIZE; - if (buf == old_buf) - return -EINVAL; + unsigned int reclaim_options; + int err; buf = strstrip(buf); + err = page_counter_memparse(buf, "", &nr_to_reclaim); + if (err) + return err; - while ((start = strsep(&buf, " ")) != NULL) { - if (!strlen(start)) - continue; - token = match_token(start, if_tokens, args); - match_strlcpy(value, args, sizeof(value)); - switch (token) { - case MEMORY_RECLAIM_NODES: - if (nodelist_parse(value, nodemask) < 0) - return -EINVAL; - break; - default: - return -EINVAL; - } - } - + reclaim_options = MEMCG_RECLAIM_MAY_SWAP | MEMCG_RECLAIM_PROACTIVE; while (nr_reclaimed < nr_to_reclaim) { unsigned long reclaimed; @@ -6667,8 +6627,7 @@ static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf, reclaimed = try_to_free_mem_cgroup_pages(memcg, nr_to_reclaim - nr_reclaimed, - GFP_KERNEL, reclaim_options, - &nodemask); + GFP_KERNEL, reclaim_options); if (!reclaimed && !nr_retries--) return -EAGAIN; diff --git a/mm/memory.c b/mm/memory.c index aad226daf41b..3e836fecd035 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -828,12 +828,8 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm, return -EBUSY; return -ENOENT; } else if (is_pte_marker_entry(entry)) { - /* - * We're copying the pgtable should only because dst_vma has - * uffd-wp enabled, do sanity check. - */ - WARN_ON_ONCE(!userfaultfd_wp(dst_vma)); - set_pte_at(dst_mm, addr, dst_pte, pte); + if (is_swapin_error_entry(entry) || userfaultfd_wp(dst_vma)) + set_pte_at(dst_mm, addr, dst_pte, pte); return 0; } if (!userfaultfd_wp(dst_vma)) @@ -3629,8 +3625,12 @@ static vm_fault_t pte_marker_clear(struct vm_fault *vmf) /* * Be careful so that we will only recover a special uffd-wp pte into a * none pte. Otherwise it means the pte could have changed, so retry. + * + * This should also cover the case where e.g. 
the pte changed + * quickly from a PTE_MARKER_UFFD_WP into PTE_MARKER_SWAPIN_ERROR. + * So is_pte_marker() check is not enough to safely drop the pte. */ - if (is_pte_marker(*vmf->pte)) + if (pte_same(vmf->orig_pte, *vmf->pte)) pte_clear(vmf->vma->vm_mm, vmf->address, vmf->pte); pte_unmap_unlock(vmf->pte, vmf->ptl); return 0; diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 02c8a712282f..f940395667c8 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -600,7 +600,8 @@ static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask, /* With MPOL_MF_MOVE, we migrate only unshared hugepage. */ if (flags & (MPOL_MF_MOVE_ALL) || - (flags & MPOL_MF_MOVE && page_mapcount(page) == 1)) { + (flags & MPOL_MF_MOVE && page_mapcount(page) == 1 && + !hugetlb_pmd_shared(pte))) { if (isolate_hugetlb(page, qp->pagelist) && (flags & MPOL_MF_STRICT)) /* diff --git a/mm/mprotect.c b/mm/mprotect.c index 908df12caa26..61cf60015a8b 100644 --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -245,7 +245,13 @@ static unsigned long change_pte_range(struct mmu_gather *tlb, newpte = pte_swp_mksoft_dirty(newpte); if (pte_swp_uffd_wp(oldpte)) newpte = pte_swp_mkuffd_wp(newpte); - } else if (pte_marker_entry_uffd_wp(entry)) { + } else if (is_pte_marker_entry(entry)) { + /* + * Ignore swapin errors unconditionally, + * because any access should sigbus anyway. + */ + if (is_swapin_error_entry(entry)) + continue; /* * If this is uffd-wp pte marker and we'd like * to unprotect it, drop it; the next page diff --git a/mm/mremap.c b/mm/mremap.c index fe587c5d6591..930f65c315c0 100644 --- a/mm/mremap.c +++ b/mm/mremap.c @@ -1027,16 +1027,29 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len, } /* - * Function vma_merge() is called on the extension we are adding to - * the already existing vma, vma_merge() will merge this extension with - * the already existing vma (expand operation itself) and possibly also - * with the next vma if it becomes adjacent to the expanded vma and - * otherwise compatible. + * Function vma_merge() is called on the extension we + * are adding to the already existing vma, vma_merge() + * will merge this extension with the already existing + * vma (expand operation itself) and possibly also with + * the next vma if it becomes adjacent to the expanded + * vma and otherwise compatible. + * + * However, vma_merge() can currently fail due to + * is_mergeable_vma() check for vm_ops->close (see the + * comment there). Yet this should not prevent vma + * expanding, so perform a simple expand for such vma. + * Ideally the check for close op should be only done + * when a vma would be actually removed due to a merge. 
*/ - vma = vma_merge(mm, vma, extension_start, extension_end, + if (!vma->vm_ops || !vma->vm_ops->close) { + vma = vma_merge(mm, vma, extension_start, extension_end, vma->vm_flags, vma->anon_vma, vma->vm_file, extension_pgoff, vma_policy(vma), vma->vm_userfaultfd_ctx, anon_vma_name(vma)); + } else if (vma_adjust(vma, vma->vm_start, addr + new_len, + vma->vm_pgoff, NULL)) { + vma = NULL; + } if (!vma) { vm_unacct_memory(pages); ret = -ENOMEM; diff --git a/mm/swapfile.c b/mm/swapfile.c index 908a529bca12..4fa440e87cd6 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -1100,6 +1100,7 @@ start_over: goto check_out; pr_debug("scan_swap_map of si %d failed to find offset\n", si->type); + cond_resched(); spin_lock(&swap_avail_lock); nextsi: diff --git a/mm/vmscan.c b/mm/vmscan.c index bd6637fcd8f9..bf3eedf0209c 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -3323,13 +3323,16 @@ void lru_gen_migrate_mm(struct mm_struct *mm) if (mem_cgroup_disabled()) return; + /* migration can happen before addition */ + if (!mm->lru_gen.memcg) + return; + rcu_read_lock(); memcg = mem_cgroup_from_task(task); rcu_read_unlock(); if (memcg == mm->lru_gen.memcg) return; - VM_WARN_ON_ONCE(!mm->lru_gen.memcg); VM_WARN_ON_ONCE(list_empty(&mm->lru_gen.list)); lru_gen_del_mm(mm); @@ -6754,8 +6757,7 @@ unsigned long mem_cgroup_shrink_node(struct mem_cgroup *memcg, unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg, unsigned long nr_pages, gfp_t gfp_mask, - unsigned int reclaim_options, - nodemask_t *nodemask) + unsigned int reclaim_options) { unsigned long nr_reclaimed; unsigned int noreclaim_flag; @@ -6770,7 +6772,6 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg, .may_unmap = 1, .may_swap = !!(reclaim_options & MEMCG_RECLAIM_MAY_SWAP), .proactive = !!(reclaim_options & MEMCG_RECLAIM_PROACTIVE), - .nodemask = nodemask, }; /* * Traverse the ZONELIST_FALLBACK zonelist of the current node to put diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c index 9445bee6b014..702bc3fd687a 100644 --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c @@ -113,7 +113,23 @@ * have room for two bit at least. */ #define OBJ_ALLOCATED_TAG 1 -#define OBJ_TAG_BITS 1 + +#ifdef CONFIG_ZPOOL +/* + * The second least-significant bit in the object's header identifies if the + * value stored at the header is a deferred handle from the last reclaim + * attempt. + * + * As noted above, this is valid because we have room for two bits. + */ +#define OBJ_DEFERRED_HANDLE_TAG 2 +#define OBJ_TAG_BITS 2 +#define OBJ_TAG_MASK (OBJ_ALLOCATED_TAG | OBJ_DEFERRED_HANDLE_TAG) +#else +#define OBJ_TAG_BITS 1 +#define OBJ_TAG_MASK OBJ_ALLOCATED_TAG +#endif /* CONFIG_ZPOOL */ + #define OBJ_INDEX_BITS (BITS_PER_LONG - _PFN_BITS - OBJ_TAG_BITS) #define OBJ_INDEX_MASK ((_AC(1, UL) << OBJ_INDEX_BITS) - 1) @@ -222,6 +238,12 @@ struct link_free { * Handle of allocated object. */ unsigned long handle; +#ifdef CONFIG_ZPOOL + /* + * Deferred handle of a reclaimed object. 
+ */ + unsigned long deferred_handle; +#endif }; }; @@ -272,8 +294,6 @@ struct zspage { /* links the zspage to the lru list in the pool */ struct list_head lru; bool under_reclaim; - /* list of unfreed handles whose objects have been reclaimed */ - unsigned long *deferred_handles; #endif struct zs_pool *pool; @@ -897,7 +917,8 @@ static unsigned long handle_to_obj(unsigned long handle) return *(unsigned long *)handle; } -static bool obj_allocated(struct page *page, void *obj, unsigned long *phandle) +static bool obj_tagged(struct page *page, void *obj, unsigned long *phandle, + int tag) { unsigned long handle; struct zspage *zspage = get_zspage(page); @@ -908,13 +929,27 @@ static bool obj_allocated(struct page *page, void *obj, unsigned long *phandle) } else handle = *(unsigned long *)obj; - if (!(handle & OBJ_ALLOCATED_TAG)) + if (!(handle & tag)) return false; - *phandle = handle & ~OBJ_ALLOCATED_TAG; + /* Clear all tags before returning the handle */ + *phandle = handle & ~OBJ_TAG_MASK; return true; } +static inline bool obj_allocated(struct page *page, void *obj, unsigned long *phandle) +{ + return obj_tagged(page, obj, phandle, OBJ_ALLOCATED_TAG); +} + +#ifdef CONFIG_ZPOOL +static bool obj_stores_deferred_handle(struct page *page, void *obj, + unsigned long *phandle) +{ + return obj_tagged(page, obj, phandle, OBJ_DEFERRED_HANDLE_TAG); +} +#endif + static void reset_page(struct page *page) { __ClearPageMovable(page); @@ -946,22 +981,36 @@ unlock: } #ifdef CONFIG_ZPOOL +static unsigned long find_deferred_handle_obj(struct size_class *class, + struct page *page, int *obj_idx); + /* * Free all the deferred handles whose objects are freed in zs_free. */ -static void free_handles(struct zs_pool *pool, struct zspage *zspage) +static void free_handles(struct zs_pool *pool, struct size_class *class, + struct zspage *zspage) { - unsigned long handle = (unsigned long)zspage->deferred_handles; + int obj_idx = 0; + struct page *page = get_first_page(zspage); + unsigned long handle; - while (handle) { - unsigned long nxt_handle = handle_to_obj(handle); + while (1) { + handle = find_deferred_handle_obj(class, page, &obj_idx); + if (!handle) { + page = get_next_page(page); + if (!page) + break; + obj_idx = 0; + continue; + } cache_free_handle(pool, handle); - handle = nxt_handle; + obj_idx++; } } #else -static inline void free_handles(struct zs_pool *pool, struct zspage *zspage) {} +static inline void free_handles(struct zs_pool *pool, struct size_class *class, + struct zspage *zspage) {} #endif static void __free_zspage(struct zs_pool *pool, struct size_class *class, @@ -979,7 +1028,7 @@ static void __free_zspage(struct zs_pool *pool, struct size_class *class, VM_BUG_ON(fg != ZS_EMPTY); /* Free all deferred handles from zs_free */ - free_handles(pool, zspage); + free_handles(pool, class, zspage); next = page = get_first_page(zspage); do { @@ -1067,7 +1116,6 @@ static void init_zspage(struct size_class *class, struct zspage *zspage) #ifdef CONFIG_ZPOOL INIT_LIST_HEAD(&zspage->lru); zspage->under_reclaim = false; - zspage->deferred_handles = NULL; #endif set_freeobj(zspage, 0); @@ -1568,7 +1616,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp) } EXPORT_SYMBOL_GPL(zs_malloc); -static void obj_free(int class_size, unsigned long obj) +static void obj_free(int class_size, unsigned long obj, unsigned long *handle) { struct link_free *link; struct zspage *zspage; @@ -1582,15 +1630,29 @@ static void obj_free(int class_size, unsigned long obj) zspage = get_zspage(f_page); vaddr = 
kmap_atomic(f_page); - - /* Insert this object in containing zspage's freelist */ link = (struct link_free *)(vaddr + f_offset); - if (likely(!ZsHugePage(zspage))) - link->next = get_freeobj(zspage) << OBJ_TAG_BITS; - else - f_page->index = 0; + + if (handle) { +#ifdef CONFIG_ZPOOL + /* Stores the (deferred) handle in the object's header */ + *handle |= OBJ_DEFERRED_HANDLE_TAG; + *handle &= ~OBJ_ALLOCATED_TAG; + + if (likely(!ZsHugePage(zspage))) + link->deferred_handle = *handle; + else + f_page->index = *handle; +#endif + } else { + /* Insert this object in containing zspage's freelist */ + if (likely(!ZsHugePage(zspage))) + link->next = get_freeobj(zspage) << OBJ_TAG_BITS; + else + f_page->index = 0; + set_freeobj(zspage, f_objidx); + } + kunmap_atomic(vaddr); - set_freeobj(zspage, f_objidx); mod_zspage_inuse(zspage, -1); } @@ -1615,7 +1677,6 @@ void zs_free(struct zs_pool *pool, unsigned long handle) zspage = get_zspage(f_page); class = zspage_class(pool, zspage); - obj_free(class->size, obj); class_stat_dec(class, OBJ_USED, 1); #ifdef CONFIG_ZPOOL @@ -1624,15 +1685,15 @@ void zs_free(struct zs_pool *pool, unsigned long handle) * Reclaim needs the handles during writeback. It'll free * them along with the zspage when it's done with them. * - * Record current deferred handle at the memory location - * whose address is given by handle. + * Record current deferred handle in the object's header. */ - record_obj(handle, (unsigned long)zspage->deferred_handles); - zspage->deferred_handles = (unsigned long *)handle; + obj_free(class->size, obj, &handle); spin_unlock(&pool->lock); return; } #endif + obj_free(class->size, obj, NULL); + fullness = fix_fullness_group(class, zspage); if (fullness == ZS_EMPTY) free_zspage(pool, class, zspage); @@ -1713,11 +1774,11 @@ static void zs_object_copy(struct size_class *class, unsigned long dst, } /* - * Find alloced object in zspage from index object and + * Find object with a certain tag in zspage from index object and * return handle. */ -static unsigned long find_alloced_obj(struct size_class *class, - struct page *page, int *obj_idx) +static unsigned long find_tagged_obj(struct size_class *class, + struct page *page, int *obj_idx, int tag) { unsigned int offset; int index = *obj_idx; @@ -1728,7 +1789,7 @@ static unsigned long find_alloced_obj(struct size_class *class, offset += class->size * index; while (offset < PAGE_SIZE) { - if (obj_allocated(page, addr + offset, &handle)) + if (obj_tagged(page, addr + offset, &handle, tag)) break; offset += class->size; @@ -1742,6 +1803,28 @@ static unsigned long find_alloced_obj(struct size_class *class, return handle; } +/* + * Find alloced object in zspage from index object and + * return handle. + */ +static unsigned long find_alloced_obj(struct size_class *class, + struct page *page, int *obj_idx) +{ + return find_tagged_obj(class, page, obj_idx, OBJ_ALLOCATED_TAG); +} + +#ifdef CONFIG_ZPOOL +/* + * Find object storing a deferred handle in header in zspage from index object + * and return handle. 
+ */ +static unsigned long find_deferred_handle_obj(struct size_class *class, + struct page *page, int *obj_idx) +{ + return find_tagged_obj(class, page, obj_idx, OBJ_DEFERRED_HANDLE_TAG); +} +#endif + struct zs_compact_control { /* Source spage for migration which could be a subpage of zspage */ struct page *s_page; @@ -1784,7 +1867,7 @@ static int migrate_zspage(struct zs_pool *pool, struct size_class *class, zs_object_copy(class, free_obj, used_obj); obj_idx++; record_obj(handle, free_obj); - obj_free(class->size, used_obj); + obj_free(class->size, used_obj, NULL); } /* Remember last position in this iteration */ @@ -2478,6 +2561,90 @@ void zs_destroy_pool(struct zs_pool *pool) EXPORT_SYMBOL_GPL(zs_destroy_pool); #ifdef CONFIG_ZPOOL +static void restore_freelist(struct zs_pool *pool, struct size_class *class, + struct zspage *zspage) +{ + unsigned int obj_idx = 0; + unsigned long handle, off = 0; /* off is within-page offset */ + struct page *page = get_first_page(zspage); + struct link_free *prev_free = NULL; + void *prev_page_vaddr = NULL; + + /* in case no free object found */ + set_freeobj(zspage, (unsigned int)(-1UL)); + + while (page) { + void *vaddr = kmap_atomic(page); + struct page *next_page; + + while (off < PAGE_SIZE) { + void *obj_addr = vaddr + off; + + /* skip allocated object */ + if (obj_allocated(page, obj_addr, &handle)) { + obj_idx++; + off += class->size; + continue; + } + + /* free deferred handle from reclaim attempt */ + if (obj_stores_deferred_handle(page, obj_addr, &handle)) + cache_free_handle(pool, handle); + + if (prev_free) + prev_free->next = obj_idx << OBJ_TAG_BITS; + else /* first free object found */ + set_freeobj(zspage, obj_idx); + + prev_free = (struct link_free *)vaddr + off / sizeof(*prev_free); + /* if last free object in a previous page, need to unmap */ + if (prev_page_vaddr) { + kunmap_atomic(prev_page_vaddr); + prev_page_vaddr = NULL; + } + + obj_idx++; + off += class->size; + } + + /* + * Handle the last (full or partial) object on this page. + */ + next_page = get_next_page(page); + if (next_page) { + if (!prev_free || prev_page_vaddr) { + /* + * There is no free object in this page, so we can safely + * unmap it. + */ + kunmap_atomic(vaddr); + } else { + /* update prev_page_vaddr since prev_free is on this page */ + prev_page_vaddr = vaddr; + } + } else { /* this is the last page */ + if (prev_free) { + /* + * Reset OBJ_TAG_BITS bit to last link to tell + * whether it's allocated object or not. + */ + prev_free->next = -1UL << OBJ_TAG_BITS; + } + + /* unmap previous page (if not done yet) */ + if (prev_page_vaddr) { + kunmap_atomic(prev_page_vaddr); + prev_page_vaddr = NULL; + } + + kunmap_atomic(vaddr); + } + + page = next_page; + off %= PAGE_SIZE; + } +} + static int zs_reclaim_page(struct zs_pool *pool, unsigned int retries) { int i, obj_idx, ret = 0; @@ -2561,6 +2728,12 @@ next: return 0; } + /* + * Eviction fails on one of the handles, so we need to restore zspage. + * We need to rebuild its freelist (and free stored deferred handles), + * put it back to the correct size class, and add it to the LRU list. 
+ */ + restore_freelist(pool, class, zspage); putback_zspage(class, zspage); list_add(&zspage->lru, &pool->lru); unlock_zspage(zspage); diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c index acf563fbdfd9..17b946f9ba31 100644 --- a/net/bluetooth/hci_conn.c +++ b/net/bluetooth/hci_conn.c @@ -1061,8 +1061,15 @@ int hci_conn_del(struct hci_conn *conn) if (conn->type == ACL_LINK) { struct hci_conn *sco = conn->link; - if (sco) + if (sco) { sco->link = NULL; + /* Due to a race, the SCO connection might not be established + * yet at this point. Delete it now, otherwise it is + * possible for it to get stuck and never be deleted. + */ + if (sco->handle == HCI_CONN_HANDLE_UNSET) + hci_conn_del(sco); + } /* Unacked frames */ hdev->acl_cnt += conn->sent; @@ -1243,6 +1250,8 @@ static void create_le_conn_complete(struct hci_dev *hdev, void *data, int err) if (conn != hci_lookup_le_connect(hdev)) goto done; + /* Flush to make sure we send the create conn cancel command if needed */ + flush_delayed_work(&conn->le_conn_timeout); hci_conn_failed(conn, bt_status(err)); done: @@ -1981,16 +1990,14 @@ static void hci_iso_qos_setup(struct hci_dev *hdev, struct hci_conn *conn, qos->latency = conn->le_conn_latency; } -static struct hci_conn *hci_bind_bis(struct hci_conn *conn, - struct bt_iso_qos *qos) +static void hci_bind_bis(struct hci_conn *conn, + struct bt_iso_qos *qos) { /* Update LINK PHYs according to QoS preference */ conn->le_tx_phy = qos->out.phy; conn->le_tx_phy = qos->out.phy; conn->iso_qos = *qos; conn->state = BT_BOUND; - - return conn; } static int create_big_sync(struct hci_dev *hdev, void *data) @@ -2119,11 +2126,7 @@ struct hci_conn *hci_connect_bis(struct hci_dev *hdev, bdaddr_t *dst, if (IS_ERR(conn)) return conn; - conn = hci_bind_bis(conn, qos); - if (!conn) { - hci_conn_drop(conn); - return ERR_PTR(-ENOMEM); - } + hci_bind_bis(conn, qos); /* Add Basic Announcement into Periodic Adv Data if BASE is set */ if (base_len && base) { diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c index a3e0dc6a6e73..adfc3ea06d08 100644 --- a/net/bluetooth/l2cap_core.c +++ b/net/bluetooth/l2cap_core.c @@ -2683,14 +2683,6 @@ int l2cap_chan_send(struct l2cap_chan *chan, struct msghdr *msg, size_t len) if (IS_ERR(skb)) return PTR_ERR(skb); - /* Channel lock is released before requesting new skb and then - * reacquired thus we need to recheck channel state. - */ - if (chan->state != BT_CONNECTED) { - kfree_skb(skb); - return -ENOTCONN; - } - l2cap_do_send(chan, skb); return len; } @@ -2735,14 +2727,6 @@ int l2cap_chan_send(struct l2cap_chan *chan, struct msghdr *msg, size_t len) if (IS_ERR(skb)) return PTR_ERR(skb); - /* Channel lock is released before requesting new skb and then - * reacquired thus we need to recheck channel state. - */ - if (chan->state != BT_CONNECTED) { - kfree_skb(skb); - return -ENOTCONN; - } - l2cap_do_send(chan, skb); err = len; break; @@ -2763,14 +2747,6 @@ int l2cap_chan_send(struct l2cap_chan *chan, struct msghdr *msg, size_t len) */ err = l2cap_segment_sdu(chan, &seg_queue, msg, len); - /* The channel could have been closed while segmenting, * check that it is still connected.
- */ - if (chan->state != BT_CONNECTED) { - __skb_queue_purge(&seg_queue); - err = -ENOTCONN; - } - if (err) break; diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c index ca8f07f3542b..eebe256104bc 100644 --- a/net/bluetooth/l2cap_sock.c +++ b/net/bluetooth/l2cap_sock.c @@ -1624,6 +1624,14 @@ static struct sk_buff *l2cap_sock_alloc_skb_cb(struct l2cap_chan *chan, if (!skb) return ERR_PTR(err); + /* Channel lock is released before requesting new skb and then + * reacquired thus we need to recheck channel state. + */ + if (chan->state != BT_CONNECTED) { + kfree_skb(skb); + return ERR_PTR(-ENOTCONN); + } + skb->priority = sk->sk_priority; bt_cb(skb)->l2cap.chan = chan; diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c index d2ea8e19aa1b..7add66f30e4d 100644 --- a/net/bluetooth/mgmt.c +++ b/net/bluetooth/mgmt.c @@ -859,6 +859,12 @@ static u32 get_supported_settings(struct hci_dev *hdev) hdev->set_bdaddr) settings |= MGMT_SETTING_CONFIGURATION; + if (cis_central_capable(hdev)) + settings |= MGMT_SETTING_CIS_CENTRAL; + + if (cis_peripheral_capable(hdev)) + settings |= MGMT_SETTING_CIS_PERIPHERAL; + settings |= MGMT_SETTING_PHY_CONFIGURATION; return settings; @@ -932,6 +938,12 @@ static u32 get_current_settings(struct hci_dev *hdev) if (hci_dev_test_flag(hdev, HCI_WIDEBAND_SPEECH_ENABLED)) settings |= MGMT_SETTING_WIDEBAND_SPEECH; + if (cis_central_capable(hdev)) + settings |= MGMT_SETTING_CIS_CENTRAL; + + if (cis_peripheral_capable(hdev)) + settings |= MGMT_SETTING_CIS_PERIPHERAL; + return settings; } diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index 8da0d73b368e..b766a84c8536 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -234,7 +234,7 @@ static int xdp_recv_frames(struct xdp_frame **frames, int nframes, int i, n; LIST_HEAD(list); - n = kmem_cache_alloc_bulk(skbuff_head_cache, gfp, nframes, (void **)skbs); + n = kmem_cache_alloc_bulk(skbuff_cache, gfp, nframes, (void **)skbs); if (unlikely(n == 0)) { for (i = 0; i < nframes; i++) xdp_return_frame(frames[i]); @@ -484,7 +484,7 @@ out: __diag_push(); __diag_ignore_all("-Wmissing-prototypes", "Global functions as their definitions will be in vmlinux BTF"); -int noinline bpf_fentry_test1(int a) +__bpf_kfunc int bpf_fentry_test1(int a) { return a + 1; } @@ -529,27 +529,35 @@ int noinline bpf_fentry_test8(struct bpf_fentry_test_t *arg) return (long)arg->a; } -int noinline bpf_modify_return_test(int a, int *b) +__bpf_kfunc int bpf_modify_return_test(int a, int *b) { *b += 1; return a + *b; } -u64 noinline bpf_kfunc_call_test1(struct sock *sk, u32 a, u64 b, u32 c, u64 d) +__bpf_kfunc u64 bpf_kfunc_call_test1(struct sock *sk, u32 a, u64 b, u32 c, u64 d) { return a + b + c + d; } -int noinline bpf_kfunc_call_test2(struct sock *sk, u32 a, u32 b) +__bpf_kfunc int bpf_kfunc_call_test2(struct sock *sk, u32 a, u32 b) { return a + b; } -struct sock * noinline bpf_kfunc_call_test3(struct sock *sk) +__bpf_kfunc struct sock *bpf_kfunc_call_test3(struct sock *sk) { return sk; } +long noinline bpf_kfunc_call_test4(signed char a, short b, int c, long d) +{ + /* Provoke the compiler to assume that the caller has sign-extended a, + * b and c on platforms where this is required (e.g. s390x). 
+ */ + return (long)a + (long)b + (long)c + d; +} + struct prog_test_member1 { int a; }; @@ -574,21 +582,21 @@ static struct prog_test_ref_kfunc prog_test_struct = { .cnt = REFCOUNT_INIT(1), }; -noinline struct prog_test_ref_kfunc * +__bpf_kfunc struct prog_test_ref_kfunc * bpf_kfunc_call_test_acquire(unsigned long *scalar_ptr) { refcount_inc(&prog_test_struct.cnt); return &prog_test_struct; } -noinline struct prog_test_member * +__bpf_kfunc struct prog_test_member * bpf_kfunc_call_memb_acquire(void) { WARN_ON_ONCE(1); return NULL; } -noinline void bpf_kfunc_call_test_release(struct prog_test_ref_kfunc *p) +__bpf_kfunc void bpf_kfunc_call_test_release(struct prog_test_ref_kfunc *p) { if (!p) return; @@ -596,11 +604,11 @@ noinline void bpf_kfunc_call_test_release(struct prog_test_ref_kfunc *p) refcount_dec(&p->cnt); } -noinline void bpf_kfunc_call_memb_release(struct prog_test_member *p) +__bpf_kfunc void bpf_kfunc_call_memb_release(struct prog_test_member *p) { } -noinline void bpf_kfunc_call_memb1_release(struct prog_test_member1 *p) +__bpf_kfunc void bpf_kfunc_call_memb1_release(struct prog_test_member1 *p) { WARN_ON_ONCE(1); } @@ -613,12 +621,14 @@ static int *__bpf_kfunc_call_test_get_mem(struct prog_test_ref_kfunc *p, const i return (int *)p; } -noinline int *bpf_kfunc_call_test_get_rdwr_mem(struct prog_test_ref_kfunc *p, const int rdwr_buf_size) +__bpf_kfunc int *bpf_kfunc_call_test_get_rdwr_mem(struct prog_test_ref_kfunc *p, + const int rdwr_buf_size) { return __bpf_kfunc_call_test_get_mem(p, rdwr_buf_size); } -noinline int *bpf_kfunc_call_test_get_rdonly_mem(struct prog_test_ref_kfunc *p, const int rdonly_buf_size) +__bpf_kfunc int *bpf_kfunc_call_test_get_rdonly_mem(struct prog_test_ref_kfunc *p, + const int rdonly_buf_size) { return __bpf_kfunc_call_test_get_mem(p, rdonly_buf_size); } @@ -628,16 +638,17 @@ noinline int *bpf_kfunc_call_test_get_rdonly_mem(struct prog_test_ref_kfunc *p, * Acquire functions must return struct pointers, so these ones are * failing. 
*/ -noinline int *bpf_kfunc_call_test_acq_rdonly_mem(struct prog_test_ref_kfunc *p, const int rdonly_buf_size) +__bpf_kfunc int *bpf_kfunc_call_test_acq_rdonly_mem(struct prog_test_ref_kfunc *p, + const int rdonly_buf_size) { return __bpf_kfunc_call_test_get_mem(p, rdonly_buf_size); } -noinline void bpf_kfunc_call_int_mem_release(int *p) +__bpf_kfunc void bpf_kfunc_call_int_mem_release(int *p) { } -noinline struct prog_test_ref_kfunc * +__bpf_kfunc struct prog_test_ref_kfunc * bpf_kfunc_call_test_kptr_get(struct prog_test_ref_kfunc **pp, int a, int b) { struct prog_test_ref_kfunc *p = READ_ONCE(*pp); @@ -686,48 +697,53 @@ struct prog_test_fail3 { char arr2[]; }; -noinline void bpf_kfunc_call_test_pass_ctx(struct __sk_buff *skb) +__bpf_kfunc void bpf_kfunc_call_test_pass_ctx(struct __sk_buff *skb) +{ +} + +__bpf_kfunc void bpf_kfunc_call_test_pass1(struct prog_test_pass1 *p) { } -noinline void bpf_kfunc_call_test_pass1(struct prog_test_pass1 *p) +__bpf_kfunc void bpf_kfunc_call_test_pass2(struct prog_test_pass2 *p) { } -noinline void bpf_kfunc_call_test_pass2(struct prog_test_pass2 *p) +__bpf_kfunc void bpf_kfunc_call_test_fail1(struct prog_test_fail1 *p) { } -noinline void bpf_kfunc_call_test_fail1(struct prog_test_fail1 *p) +__bpf_kfunc void bpf_kfunc_call_test_fail2(struct prog_test_fail2 *p) { } -noinline void bpf_kfunc_call_test_fail2(struct prog_test_fail2 *p) +__bpf_kfunc void bpf_kfunc_call_test_fail3(struct prog_test_fail3 *p) { } -noinline void bpf_kfunc_call_test_fail3(struct prog_test_fail3 *p) +__bpf_kfunc void bpf_kfunc_call_test_mem_len_pass1(void *mem, int mem__sz) { } -noinline void bpf_kfunc_call_test_mem_len_pass1(void *mem, int mem__sz) +__bpf_kfunc void bpf_kfunc_call_test_mem_len_fail1(void *mem, int len) { } -noinline void bpf_kfunc_call_test_mem_len_fail1(void *mem, int len) +__bpf_kfunc void bpf_kfunc_call_test_mem_len_fail2(u64 *mem, int len) { } -noinline void bpf_kfunc_call_test_mem_len_fail2(u64 *mem, int len) +__bpf_kfunc void bpf_kfunc_call_test_ref(struct prog_test_ref_kfunc *p) { } -noinline void bpf_kfunc_call_test_ref(struct prog_test_ref_kfunc *p) +__bpf_kfunc void bpf_kfunc_call_test_destructive(void) { } -noinline void bpf_kfunc_call_test_destructive(void) +__bpf_kfunc static u32 bpf_kfunc_call_test_static_unused_arg(u32 arg, u32 unused) { + return arg; } __diag_pop(); @@ -746,6 +762,7 @@ BTF_SET8_START(test_sk_check_kfunc_ids) BTF_ID_FLAGS(func, bpf_kfunc_call_test1) BTF_ID_FLAGS(func, bpf_kfunc_call_test2) BTF_ID_FLAGS(func, bpf_kfunc_call_test3) +BTF_ID_FLAGS(func, bpf_kfunc_call_test4) BTF_ID_FLAGS(func, bpf_kfunc_call_test_acquire, KF_ACQUIRE | KF_RET_NULL) BTF_ID_FLAGS(func, bpf_kfunc_call_memb_acquire, KF_ACQUIRE | KF_RET_NULL) BTF_ID_FLAGS(func, bpf_kfunc_call_test_release, KF_RELEASE) @@ -767,6 +784,7 @@ BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_fail1) BTF_ID_FLAGS(func, bpf_kfunc_call_test_mem_len_fail2) BTF_ID_FLAGS(func, bpf_kfunc_call_test_ref, KF_TRUSTED_ARGS) BTF_ID_FLAGS(func, bpf_kfunc_call_test_destructive, KF_DESTRUCTIVE) +BTF_ID_FLAGS(func, bpf_kfunc_call_test_static_unused_arg) BTF_SET8_END(test_sk_check_kfunc_ids) static void *bpf_test_init(const union bpf_attr *kattr, u32 user_size, diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c index 9f22ebfdc518..25c48d81a597 100644 --- a/net/bridge/br_mdb.c +++ b/net/bridge/br_mdb.c @@ -259,7 +259,7 @@ static int __mdb_fill_info(struct sk_buff *skb, #endif } else { ether_addr_copy(e.addr.u.mac_addr, mp->addr.dst.mac_addr); - e.state = MDB_PG_FLAGS_PERMANENT; + e.state = 
MDB_PERMANENT; } e.addr.proto = mp->addr.proto; nest_ent = nla_nest_start_noflag(skb, @@ -421,8 +421,6 @@ static int br_mdb_dump(struct sk_buff *skb, struct netlink_callback *cb) rcu_read_lock(); - cb->seq = net->dev_base_seq; - for_each_netdev_rcu(net, dev) { if (netif_is_bridge_master(dev)) { struct net_bridge *br = netdev_priv(dev); @@ -685,51 +683,58 @@ static const struct nla_policy br_mdbe_attrs_pol[MDBE_ATTR_MAX + 1] = { [MDBE_ATTR_RTPROT] = NLA_POLICY_MIN(NLA_U8, RTPROT_STATIC), }; -static bool is_valid_mdb_entry(struct br_mdb_entry *entry, - struct netlink_ext_ack *extack) +static int validate_mdb_entry(const struct nlattr *attr, + struct netlink_ext_ack *extack) { + struct br_mdb_entry *entry = nla_data(attr); + + if (nla_len(attr) != sizeof(struct br_mdb_entry)) { + NL_SET_ERR_MSG_MOD(extack, "Invalid MDBA_SET_ENTRY attribute length"); + return -EINVAL; + } + if (entry->ifindex == 0) { NL_SET_ERR_MSG_MOD(extack, "Zero entry ifindex is not allowed"); - return false; + return -EINVAL; } if (entry->addr.proto == htons(ETH_P_IP)) { if (!ipv4_is_multicast(entry->addr.u.ip4)) { NL_SET_ERR_MSG_MOD(extack, "IPv4 entry group address is not multicast"); - return false; + return -EINVAL; } if (ipv4_is_local_multicast(entry->addr.u.ip4)) { NL_SET_ERR_MSG_MOD(extack, "IPv4 entry group address is local multicast"); - return false; + return -EINVAL; } #if IS_ENABLED(CONFIG_IPV6) } else if (entry->addr.proto == htons(ETH_P_IPV6)) { if (ipv6_addr_is_ll_all_nodes(&entry->addr.u.ip6)) { NL_SET_ERR_MSG_MOD(extack, "IPv6 entry group address is link-local all nodes"); - return false; + return -EINVAL; } #endif } else if (entry->addr.proto == 0) { /* L2 mdb */ if (!is_multicast_ether_addr(entry->addr.u.mac_addr)) { NL_SET_ERR_MSG_MOD(extack, "L2 entry group is not multicast"); - return false; + return -EINVAL; } } else { NL_SET_ERR_MSG_MOD(extack, "Unknown entry protocol"); - return false; + return -EINVAL; } if (entry->state != MDB_PERMANENT && entry->state != MDB_TEMPORARY) { NL_SET_ERR_MSG_MOD(extack, "Unknown entry state"); - return false; + return -EINVAL; } if (entry->vid >= VLAN_VID_MASK) { NL_SET_ERR_MSG_MOD(extack, "Invalid entry VLAN id"); - return false; + return -EINVAL; } - return true; + return 0; } static bool is_valid_mdb_source(struct nlattr *attr, __be16 proto, @@ -1294,6 +1299,14 @@ static int br_mdb_config_attrs_init(struct nlattr *set_attrs, return 0; } +static const struct nla_policy mdba_policy[MDBA_SET_ENTRY_MAX + 1] = { + [MDBA_SET_ENTRY_UNSPEC] = { .strict_start_type = MDBA_SET_ENTRY_ATTRS + 1 }, + [MDBA_SET_ENTRY] = NLA_POLICY_VALIDATE_FN(NLA_BINARY, + validate_mdb_entry, + sizeof(struct br_mdb_entry)), + [MDBA_SET_ENTRY_ATTRS] = { .type = NLA_NESTED }, +}; + static int br_mdb_config_init(struct net *net, const struct nlmsghdr *nlh, struct br_mdb_config *cfg, struct netlink_ext_ack *extack) @@ -1304,7 +1317,7 @@ static int br_mdb_config_init(struct net *net, const struct nlmsghdr *nlh, int err; err = nlmsg_parse_deprecated(nlh, sizeof(*bpm), tb, - MDBA_SET_ENTRY_MAX, NULL, extack); + MDBA_SET_ENTRY_MAX, mdba_policy, extack); if (err) return err; @@ -1346,14 +1359,8 @@ static int br_mdb_config_init(struct net *net, const struct nlmsghdr *nlh, NL_SET_ERR_MSG_MOD(extack, "Missing MDBA_SET_ENTRY attribute"); return -EINVAL; } - if (nla_len(tb[MDBA_SET_ENTRY]) != sizeof(struct br_mdb_entry)) { - NL_SET_ERR_MSG_MOD(extack, "Invalid MDBA_SET_ENTRY attribute length"); - return -EINVAL; - } cfg->entry = nla_data(tb[MDBA_SET_ENTRY]); - if (!is_valid_mdb_entry(cfg->entry, extack)) - 
return -EINVAL; if (cfg->entry->ifindex != cfg->br->dev->ifindex) { struct net_device *pdev; diff --git a/net/can/j1939/address-claim.c b/net/can/j1939/address-claim.c index f33c47327927..ca4ad6cdd5cb 100644 --- a/net/can/j1939/address-claim.c +++ b/net/can/j1939/address-claim.c @@ -165,6 +165,46 @@ static void j1939_ac_process(struct j1939_priv *priv, struct sk_buff *skb) * leaving this function. */ ecu = j1939_ecu_get_by_name_locked(priv, name); + + if (ecu && ecu->addr == skcb->addr.sa) { + /* The ISO 11783-5 standard, in "4.5.2 - Address claim + * requirements", states: + * d) No CF shall begin, or resume, transmission on the + * network until 250 ms after it has successfully claimed + * an address except when responding to a request for + * address-claimed. + * + * But "Figure 6" and "Figure 7" in "4.5.4.2 - Address-claim + * prioritization" show that the CF begins the transmission + * after 250 ms from the first AC (address-claimed) message + * even if it sends another AC message during that time window + * to resolve the address contention with another CF. + * + * As stated in "4.4.2.3 - Address-claimed message": + * In order to successfully claim an address, the CF sending + * an address claimed message shall not receive a contending + * claim from another CF for at least 250 ms. + * + * As stated in "4.4.3.2 - NAME management (NM) message": + * 1) A commanding CF can + * d) request that a CF with a specified NAME transmit + * the address-claimed message with its current NAME. + * 2) A target CF shall + * d) send an address-claimed message in response to a + * request for a matching NAME + * + * Taking the above arguments into account, the 250 ms wait is + * requested only during network initialization. + * + * Do not restart the timer on AC message if both the NAME and + * the address match and so if the address has already been + * claimed (timer has expired) or the AC message has been sent + * to resolve the contention with another CF (timer is still + * running). 
+ goto out_ecu_put; + } + if (!ecu && j1939_address_is_unicast(skcb->addr.sa)) ecu = j1939_ecu_create_locked(priv, name); diff --git a/net/core/Makefile b/net/core/Makefile index 10edd66a8a37..8f367813bc68 100644 --- a/net/core/Makefile +++ b/net/core/Makefile @@ -12,7 +12,8 @@ obj-$(CONFIG_SYSCTL) += sysctl_net_core.o obj-y += dev.o dev_addr_lists.o dst.o netevent.o \ neighbour.o rtnetlink.o utils.o link_watch.o filter.o \ sock_diag.o dev_ioctl.o tso.o sock_reuseport.o \ - fib_notifier.o xdp.o flow_offload.o gro.o + fib_notifier.o xdp.o flow_offload.o gro.o \ + netdev-genl.o netdev-genl-gen.o obj-$(CONFIG_NETDEV_ADDR_LIST_TEST) += dev_addr_lists_test.o diff --git a/net/core/dev.c b/net/core/dev.c index bb42150a38ec..7307a0c15c9f 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -1614,6 +1614,7 @@ const char *netdev_cmd_to_name(enum netdev_cmd cmd) N(SVLAN_FILTER_PUSH_INFO) N(SVLAN_FILTER_DROP_INFO) N(PRE_CHANGEADDR) N(OFFLOAD_XSTATS_ENABLE) N(OFFLOAD_XSTATS_DISABLE) N(OFFLOAD_XSTATS_REPORT_USED) N(OFFLOAD_XSTATS_REPORT_DELTA) + N(XDP_FEAT_CHANGE) } #undef N return "UNKNOWN_NETDEV_EVENT"; diff --git a/net/core/dev.h b/net/core/dev.h index a065b7571441..e075e198092c 100644 --- a/net/core/dev.h +++ b/net/core/dev.h @@ -9,6 +9,7 @@ struct net_device; struct netdev_bpf; struct netdev_phys_item_id; struct netlink_ext_ack; +struct cpumask; /* Random bits of netdevice that don't need to be exposed */ #define FLOW_LIMIT_HISTORY (1 << 7) /* must be ^2 and !overflow buckets */ @@ -134,4 +135,5 @@ static inline void netif_set_gro_ipv4_max_size(struct net_device *dev, WRITE_ONCE(dev->gro_ipv4_max_size, size); } +int rps_cpumask_housekeeping(struct cpumask *mask); #endif diff --git a/net/core/filter.c b/net/core/filter.c index d8f9b53f3db6..2ce06a72a5ba 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -4318,16 +4318,13 @@ int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp, struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); enum bpf_map_type map_type = ri->map_type; - /* XDP_REDIRECT is not fully supported yet for xdp frags since - * not all XDP capable drivers can map non-linear xdp_frame in - * ndo_xdp_xmit. - */ - if (unlikely(xdp_buff_has_frags(xdp) && - map_type != BPF_MAP_TYPE_CPUMAP)) - return -EOPNOTSUPP; + if (map_type == BPF_MAP_TYPE_XSKMAP) { + /* XDP_REDIRECT is not yet supported for AF_XDP.
*/ + if (unlikely(xdp_buff_has_frags(xdp))) + return -EOPNOTSUPP; - if (map_type == BPF_MAP_TYPE_XSKMAP) return __xdp_do_redirect_xsk(ri, dev, xdp, xdp_prog); + } return __xdp_do_redirect_frame(ri, dev, xdp_convert_buff_to_frame(xdp), xdp_prog); @@ -7536,7 +7533,7 @@ static const struct bpf_func_proto bpf_tcp_raw_gen_syncookie_ipv4_proto = { .arg1_type = ARG_PTR_TO_FIXED_SIZE_MEM, .arg1_size = sizeof(struct iphdr), .arg2_type = ARG_PTR_TO_MEM, - .arg3_type = ARG_CONST_SIZE, + .arg3_type = ARG_CONST_SIZE_OR_ZERO, }; BPF_CALL_3(bpf_tcp_raw_gen_syncookie_ipv6, struct ipv6hdr *, iph, @@ -7568,7 +7565,7 @@ static const struct bpf_func_proto bpf_tcp_raw_gen_syncookie_ipv6_proto = { .arg1_type = ARG_PTR_TO_FIXED_SIZE_MEM, .arg1_size = sizeof(struct ipv6hdr), .arg2_type = ARG_PTR_TO_MEM, - .arg3_type = ARG_CONST_SIZE, + .arg3_type = ARG_CONST_SIZE_OR_ZERO, }; BPF_CALL_2(bpf_tcp_raw_check_syncookie_ipv4, struct iphdr *, iph, diff --git a/net/core/neighbour.c b/net/core/neighbour.c index 57258110bccd..6798f6d2423b 100644 --- a/net/core/neighbour.c +++ b/net/core/neighbour.c @@ -269,7 +269,7 @@ static int neigh_forced_gc(struct neigh_table *tbl) (n->nud_state == NUD_NOARP) || (tbl->is_multicast && tbl->is_multicast(n->primary_key)) || - time_after(tref, n->updated)) + !time_in_range(n->updated, tref, jiffies)) remove = true; write_unlock(&n->lock); @@ -289,7 +289,17 @@ static int neigh_forced_gc(struct neigh_table *tbl) static void neigh_add_timer(struct neighbour *n, unsigned long when) { + /* Use safe distance from the jiffies - LONG_MAX point while timer + * is running in DELAY/PROBE state but still show to user space + * large times in the past. + */ + unsigned long mint = jiffies - (LONG_MAX - 86400 * HZ); + neigh_hold(n); + if (!time_in_range(n->confirmed, mint, jiffies)) + n->confirmed = mint; + if (time_before(n->used, n->confirmed)) + n->used = n->confirmed; if (unlikely(mod_timer(&n->timer, when))) { printk("NEIGH: BUG, double timer add, state is %x\n", n->nud_state); @@ -1001,12 +1011,14 @@ static void neigh_periodic_work(struct work_struct *work) goto next_elt; } - if (time_before(n->used, n->confirmed)) + if (time_before(n->used, n->confirmed) && + time_is_before_eq_jiffies(n->confirmed)) n->used = n->confirmed; if (refcount_read(&n->refcnt) == 1 && (state == NUD_FAILED || - time_after(jiffies, n->used + NEIGH_VAR(n->parms, GC_STALETIME)))) { + !time_in_range_open(jiffies, n->used, + n->used + NEIGH_VAR(n->parms, GC_STALETIME)))) { *np = n->next; neigh_mark_dead(n); write_unlock(&n->lock); diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c index ca55dd747d6c..4b361ac6a252 100644 --- a/net/core/net-sysfs.c +++ b/net/core/net-sysfs.c @@ -831,42 +831,18 @@ static ssize_t show_rps_map(struct netdev_rx_queue *queue, char *buf) return len < PAGE_SIZE ? 
len : -EINVAL; } -static ssize_t store_rps_map(struct netdev_rx_queue *queue, - const char *buf, size_t len) +static int netdev_rx_queue_set_rps_mask(struct netdev_rx_queue *queue, + cpumask_var_t mask) { - struct rps_map *old_map, *map; - cpumask_var_t mask; - int err, cpu, i; static DEFINE_MUTEX(rps_map_mutex); - - if (!capable(CAP_NET_ADMIN)) - return -EPERM; - - if (!alloc_cpumask_var(&mask, GFP_KERNEL)) - return -ENOMEM; - - err = bitmap_parse(buf, len, cpumask_bits(mask), nr_cpumask_bits); - if (err) { - free_cpumask_var(mask); - return err; - } - - if (!cpumask_empty(mask)) { - cpumask_and(mask, mask, housekeeping_cpumask(HK_TYPE_DOMAIN)); - cpumask_and(mask, mask, housekeeping_cpumask(HK_TYPE_WQ)); - if (cpumask_empty(mask)) { - free_cpumask_var(mask); - return -EINVAL; - } - } + struct rps_map *old_map, *map; + int cpu, i; map = kzalloc(max_t(unsigned int, RPS_MAP_SIZE(cpumask_weight(mask)), L1_CACHE_BYTES), GFP_KERNEL); - if (!map) { - free_cpumask_var(mask); + if (!map) return -ENOMEM; - } i = 0; for_each_cpu_and(cpu, mask, cpu_online_mask) @@ -893,9 +869,45 @@ static ssize_t store_rps_map(struct netdev_rx_queue *queue, if (old_map) kfree_rcu(old_map, rcu); + return 0; +} +int rps_cpumask_housekeeping(struct cpumask *mask) +{ + if (!cpumask_empty(mask)) { + cpumask_and(mask, mask, housekeeping_cpumask(HK_TYPE_DOMAIN)); + cpumask_and(mask, mask, housekeeping_cpumask(HK_TYPE_WQ)); + if (cpumask_empty(mask)) + return -EINVAL; + } + return 0; +} + +static ssize_t store_rps_map(struct netdev_rx_queue *queue, + const char *buf, size_t len) +{ + cpumask_var_t mask; + int err; + + if (!capable(CAP_NET_ADMIN)) + return -EPERM; + + if (!alloc_cpumask_var(&mask, GFP_KERNEL)) + return -ENOMEM; + + err = bitmap_parse(buf, len, cpumask_bits(mask), nr_cpumask_bits); + if (err) + goto out; + + err = rps_cpumask_housekeeping(mask); + if (err) + goto out; + + err = netdev_rx_queue_set_rps_mask(queue, mask); + +out: free_cpumask_var(mask); - return len; + return err ? 
: len; } static ssize_t show_rps_dev_flow_table_cnt(struct netdev_rx_queue *queue, @@ -1071,6 +1083,13 @@ static int rx_queue_add_kobject(struct net_device *dev, int index) goto err; } +#if IS_ENABLED(CONFIG_RPS) && IS_ENABLED(CONFIG_SYSCTL) + if (!cpumask_empty(&rps_default_mask)) { + error = netdev_rx_queue_set_rps_mask(queue, &rps_default_mask); + if (error) + goto err; + } +#endif kobject_uevent(kobj, KOBJ_ADD); return error; diff --git a/net/core/netdev-genl-gen.c b/net/core/netdev-genl-gen.c new file mode 100644 index 000000000000..48812ec843f5 --- /dev/null +++ b/net/core/netdev-genl-gen.c @@ -0,0 +1,48 @@ +// SPDX-License-Identifier: BSD-3-Clause +/* Do not edit directly, auto-generated from: */ +/* Documentation/netlink/specs/netdev.yaml */ +/* YNL-GEN kernel source */ + +#include <net/netlink.h> +#include <net/genetlink.h> + +#include "netdev-genl-gen.h" + +#include <linux/netdev.h> + +/* NETDEV_CMD_DEV_GET - do */ +static const struct nla_policy netdev_dev_get_nl_policy[NETDEV_A_DEV_IFINDEX + 1] = { + [NETDEV_A_DEV_IFINDEX] = NLA_POLICY_MIN(NLA_U32, 1), +}; + +/* Ops table for netdev */ +static const struct genl_split_ops netdev_nl_ops[2] = { + { + .cmd = NETDEV_CMD_DEV_GET, + .doit = netdev_nl_dev_get_doit, + .policy = netdev_dev_get_nl_policy, + .maxattr = NETDEV_A_DEV_IFINDEX, + .flags = GENL_CMD_CAP_DO, + }, + { + .cmd = NETDEV_CMD_DEV_GET, + .dumpit = netdev_nl_dev_get_dumpit, + .flags = GENL_CMD_CAP_DUMP, + }, +}; + +static const struct genl_multicast_group netdev_nl_mcgrps[] = { + [NETDEV_NLGRP_MGMT] = { "mgmt", }, +}; + +struct genl_family netdev_nl_family __ro_after_init = { + .name = NETDEV_FAMILY_NAME, + .version = NETDEV_FAMILY_VERSION, + .netnsok = true, + .parallel_ops = true, + .module = THIS_MODULE, + .split_ops = netdev_nl_ops, + .n_split_ops = ARRAY_SIZE(netdev_nl_ops), + .mcgrps = netdev_nl_mcgrps, + .n_mcgrps = ARRAY_SIZE(netdev_nl_mcgrps), +}; diff --git a/net/core/netdev-genl-gen.h b/net/core/netdev-genl-gen.h new file mode 100644 index 000000000000..b16dc7e026bb --- /dev/null +++ b/net/core/netdev-genl-gen.h @@ -0,0 +1,23 @@ +/* SPDX-License-Identifier: BSD-3-Clause */ +/* Do not edit directly, auto-generated from: */ +/* Documentation/netlink/specs/netdev.yaml */ +/* YNL-GEN kernel header */ + +#ifndef _LINUX_NETDEV_GEN_H +#define _LINUX_NETDEV_GEN_H + +#include <net/netlink.h> +#include <net/genetlink.h> + +#include <linux/netdev.h> + +int netdev_nl_dev_get_doit(struct sk_buff *skb, struct genl_info *info); +int netdev_nl_dev_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb); + +enum { + NETDEV_NLGRP_MGMT, +}; + +extern struct genl_family netdev_nl_family; + +#endif /* _LINUX_NETDEV_GEN_H */ diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c new file mode 100644 index 000000000000..a4270fafdf11 --- /dev/null +++ b/net/core/netdev-genl.c @@ -0,0 +1,179 @@ +// SPDX-License-Identifier: GPL-2.0-only + +#include <linux/netdevice.h> +#include <linux/notifier.h> +#include <linux/rtnetlink.h> +#include <net/net_namespace.h> +#include <net/sock.h> + +#include "netdev-genl-gen.h" + +static int +netdev_nl_dev_fill(struct net_device *netdev, struct sk_buff *rsp, + u32 portid, u32 seq, int flags, u32 cmd) +{ + void *hdr; + + hdr = genlmsg_put(rsp, portid, seq, &netdev_nl_family, flags, cmd); + if (!hdr) + return -EMSGSIZE; + + if (nla_put_u32(rsp, NETDEV_A_DEV_IFINDEX, netdev->ifindex) || + nla_put_u64_64bit(rsp, NETDEV_A_DEV_XDP_FEATURES, + netdev->xdp_features, NETDEV_A_DEV_PAD)) { + genlmsg_cancel(rsp, hdr); + return -EINVAL; + } + + 
genlmsg_end(rsp, hdr); + + return 0; +} + +static void +netdev_genl_dev_notify(struct net_device *netdev, int cmd) +{ + struct sk_buff *ntf; + + if (!genl_has_listeners(&netdev_nl_family, dev_net(netdev), + NETDEV_NLGRP_MGMT)) + return; + + ntf = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL); + if (!ntf) + return; + + if (netdev_nl_dev_fill(netdev, ntf, 0, 0, 0, cmd)) { + nlmsg_free(ntf); + return; + } + + genlmsg_multicast_netns(&netdev_nl_family, dev_net(netdev), ntf, + 0, NETDEV_NLGRP_MGMT, GFP_KERNEL); +} + +int netdev_nl_dev_get_doit(struct sk_buff *skb, struct genl_info *info) +{ + struct net_device *netdev; + struct sk_buff *rsp; + u32 ifindex; + int err; + + if (GENL_REQ_ATTR_CHECK(info, NETDEV_A_DEV_IFINDEX)) + return -EINVAL; + + ifindex = nla_get_u32(info->attrs[NETDEV_A_DEV_IFINDEX]); + + rsp = genlmsg_new(GENLMSG_DEFAULT_SIZE, GFP_KERNEL); + if (!rsp) + return -ENOMEM; + + rtnl_lock(); + + netdev = __dev_get_by_index(genl_info_net(info), ifindex); + if (netdev) + err = netdev_nl_dev_fill(netdev, rsp, info->snd_portid, + info->snd_seq, 0, info->genlhdr->cmd); + else + err = -ENODEV; + + rtnl_unlock(); + + if (err) + goto err_free_msg; + + return genlmsg_reply(rsp, info); + +err_free_msg: + nlmsg_free(rsp); + return err; +} + +int netdev_nl_dev_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb) +{ + struct net *net = sock_net(skb->sk); + struct net_device *netdev; + int idx = 0, s_idx; + int h, s_h; + int err; + + s_h = cb->args[0]; + s_idx = cb->args[1]; + + rtnl_lock(); + + for (h = s_h; h < NETDEV_HASHENTRIES; h++, s_idx = 0) { + struct hlist_head *head; + + idx = 0; + head = &net->dev_index_head[h]; + hlist_for_each_entry(netdev, head, index_hlist) { + if (idx < s_idx) + goto cont; + err = netdev_nl_dev_fill(netdev, skb, + NETLINK_CB(cb->skb).portid, + cb->nlh->nlmsg_seq, 0, + NETDEV_CMD_DEV_GET); + if (err < 0) + break; +cont: + idx++; + } + } + + rtnl_unlock(); + + if (err != -EMSGSIZE) + return err; + + cb->args[1] = idx; + cb->args[0] = h; + cb->seq = net->dev_base_seq; + + return skb->len; +} + +static int netdev_genl_netdevice_event(struct notifier_block *nb, + unsigned long event, void *ptr) +{ + struct net_device *netdev = netdev_notifier_info_to_dev(ptr); + + switch (event) { + case NETDEV_REGISTER: + netdev_genl_dev_notify(netdev, NETDEV_CMD_DEV_ADD_NTF); + break; + case NETDEV_UNREGISTER: + netdev_genl_dev_notify(netdev, NETDEV_CMD_DEV_DEL_NTF); + break; + case NETDEV_XDP_FEAT_CHANGE: + netdev_genl_dev_notify(netdev, NETDEV_CMD_DEV_CHANGE_NTF); + break; + } + + return NOTIFY_OK; +} + +static struct notifier_block netdev_genl_nb = { + .notifier_call = netdev_genl_netdevice_event, +}; + +static int __init netdev_genl_init(void) +{ + int err; + + err = register_netdevice_notifier(&netdev_genl_nb); + if (err) + return err; + + err = genl_register_family(&netdev_nl_family); + if (err) + goto err_unreg_ntf; + + return 0; + +err_unreg_ntf: + unregister_netdevice_notifier(&netdev_genl_nb); + return err; +} + +subsys_initcall(netdev_genl_init); diff --git a/net/core/skbuff.c b/net/core/skbuff.c index bdb1e015e32b..13ea10cf8544 100644 --- a/net/core/skbuff.c +++ b/net/core/skbuff.c @@ -84,7 +84,7 @@ #include "dev.h" #include "sock_destructor.h" -struct kmem_cache *skbuff_head_cache __ro_after_init; +struct kmem_cache *skbuff_cache __ro_after_init; static struct kmem_cache *skbuff_fclone_cache __ro_after_init; #ifdef CONFIG_SKB_EXTENSIONS static struct kmem_cache *skbuff_ext_cache __ro_after_init; @@ -285,7 +285,7 @@ static struct sk_buff *napi_skb_cache_get(void) 
struct sk_buff *skb; if (unlikely(!nc->skb_count)) { - nc->skb_count = kmem_cache_alloc_bulk(skbuff_head_cache, + nc->skb_count = kmem_cache_alloc_bulk(skbuff_cache, GFP_ATOMIC, NAPI_SKB_CACHE_BULK, nc->skb_cache); @@ -294,7 +294,7 @@ static struct sk_buff *napi_skb_cache_get(void) } skb = nc->skb_cache[--nc->skb_count]; - kasan_unpoison_object_data(skbuff_head_cache, skb); + kasan_unpoison_object_data(skbuff_cache, skb); return skb; } @@ -352,7 +352,7 @@ struct sk_buff *slab_build_skb(void *data) struct sk_buff *skb; unsigned int size; - skb = kmem_cache_alloc(skbuff_head_cache, GFP_ATOMIC); + skb = kmem_cache_alloc(skbuff_cache, GFP_ATOMIC); if (unlikely(!skb)) return NULL; @@ -403,7 +403,7 @@ struct sk_buff *__build_skb(void *data, unsigned int frag_size) { struct sk_buff *skb; - skb = kmem_cache_alloc(skbuff_head_cache, GFP_ATOMIC); + skb = kmem_cache_alloc(skbuff_cache, GFP_ATOMIC); if (unlikely(!skb)) return NULL; @@ -585,7 +585,7 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask, u8 *data; cache = (flags & SKB_ALLOC_FCLONE) - ? skbuff_fclone_cache : skbuff_head_cache; + ? skbuff_fclone_cache : skbuff_cache; if (sk_memalloc_socks() && (flags & SKB_ALLOC_RX)) gfp_mask |= __GFP_MEMALLOC; @@ -921,7 +921,7 @@ static void kfree_skbmem(struct sk_buff *skb) switch (skb->fclone) { case SKB_FCLONE_UNAVAILABLE: - kmem_cache_free(skbuff_head_cache, skb); + kmem_cache_free(skbuff_cache, skb); return; case SKB_FCLONE_ORIG: @@ -1035,7 +1035,7 @@ static void kfree_skb_add_bulk(struct sk_buff *skb, sa->skb_array[sa->skb_count++] = skb; if (unlikely(sa->skb_count == KFREE_SKB_BULK_SIZE)) { - kmem_cache_free_bulk(skbuff_head_cache, KFREE_SKB_BULK_SIZE, + kmem_cache_free_bulk(skbuff_cache, KFREE_SKB_BULK_SIZE, sa->skb_array); sa->skb_count = 0; } @@ -1060,8 +1060,7 @@ kfree_skb_list_reason(struct sk_buff *segs, enum skb_drop_reason reason) } if (sa.skb_count) - kmem_cache_free_bulk(skbuff_head_cache, sa.skb_count, - sa.skb_array); + kmem_cache_free_bulk(skbuff_cache, sa.skb_count, sa.skb_array); } EXPORT_SYMBOL(kfree_skb_list_reason); @@ -1215,15 +1214,15 @@ static void napi_skb_cache_put(struct sk_buff *skb) struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache); u32 i; - kasan_poison_object_data(skbuff_head_cache, skb); + kasan_poison_object_data(skbuff_cache, skb); nc->skb_cache[nc->skb_count++] = skb; if (unlikely(nc->skb_count == NAPI_SKB_CACHE_SIZE)) { for (i = NAPI_SKB_CACHE_HALF; i < NAPI_SKB_CACHE_SIZE; i++) - kasan_unpoison_object_data(skbuff_head_cache, + kasan_unpoison_object_data(skbuff_cache, nc->skb_cache[i]); - kmem_cache_free_bulk(skbuff_head_cache, NAPI_SKB_CACHE_HALF, + kmem_cache_free_bulk(skbuff_cache, NAPI_SKB_CACHE_HALF, nc->skb_cache + NAPI_SKB_CACHE_HALF); nc->skb_count = NAPI_SKB_CACHE_HALF; } @@ -1807,7 +1806,7 @@ struct sk_buff *skb_clone(struct sk_buff *skb, gfp_t gfp_mask) if (skb_pfmemalloc(skb)) gfp_mask |= __GFP_MEMALLOC; - n = kmem_cache_alloc(skbuff_head_cache, gfp_mask); + n = kmem_cache_alloc(skbuff_cache, gfp_mask); if (!n) return NULL; @@ -4677,7 +4676,7 @@ static void skb_extensions_init(void) {} void __init skb_init(void) { - skbuff_head_cache = kmem_cache_create_usercopy("skbuff_head_cache", + skbuff_cache = kmem_cache_create_usercopy("skbuff_head_cache", sizeof(struct sk_buff), 0, SLAB_HWCACHE_ALIGN|SLAB_PANIC, @@ -4690,10 +4689,16 @@ void __init skb_init(void) SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL); #ifdef HAVE_SKB_SMALL_HEAD_CACHE - skb_small_head_cache = kmem_cache_create("skbuff_small_head", + /* usercopy should only access first 
SKB_SMALL_HEAD_HEADROOM bytes. + * struct skb_shared_info is located at the end of skb->head, + * and should not be copied to/from user. + */ + skb_small_head_cache = kmem_cache_create_usercopy("skbuff_small_head", SKB_SMALL_HEAD_CACHE_SIZE, 0, SLAB_HWCACHE_ALIGN | SLAB_PANIC, + 0, + SKB_SMALL_HEAD_HEADROOM, NULL); #endif skb_extensions_init(); @@ -5550,7 +5555,7 @@ void kfree_skb_partial(struct sk_buff *skb, bool head_stolen) { if (head_stolen) { skb_release_head_state(skb); - kmem_cache_free(skbuff_head_cache, skb); + kmem_cache_free(skbuff_cache, skb); } else { __kfree_skb(skb); } diff --git a/net/core/sock.c b/net/core/sock.c index 208634b01df5..afbb02984d5f 100644 --- a/net/core/sock.c +++ b/net/core/sock.c @@ -1531,6 +1531,8 @@ set_sndbuf: ret = -EINVAL; break; } + if ((u8)val == SOCK_TXREHASH_DEFAULT) + val = READ_ONCE(sock_net(sk)->core.sysctl_txrehash); /* Paired with READ_ONCE() in tcp_rtx_synack() */ WRITE_ONCE(sk->sk_txrehash, (u8)val); break; @@ -3454,7 +3456,6 @@ void sock_init_data_uid(struct socket *sock, struct sock *sk, kuid_t uid) sk->sk_pacing_rate = ~0UL; WRITE_ONCE(sk->sk_pacing_shift, 10); sk->sk_incoming_cpu = -1; - sk->sk_txrehash = SOCK_TXREHASH_DEFAULT; sk_rx_queue_clear(sk); /* diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c index e7b98162c632..7130e6d9e263 100644 --- a/net/core/sysctl_net_core.c +++ b/net/core/sysctl_net_core.c @@ -16,6 +16,7 @@ #include <linux/vmalloc.h> #include <linux/init.h> #include <linux/slab.h> +#include <linux/sched/isolation.h> #include <net/ip.h> #include <net/sock.h> @@ -45,7 +46,59 @@ EXPORT_SYMBOL(sysctl_fb_tunnels_only_for_init_net); int sysctl_devconf_inherit_init_net __read_mostly; EXPORT_SYMBOL(sysctl_devconf_inherit_init_net); +#if IS_ENABLED(CONFIG_NET_FLOW_LIMIT) || IS_ENABLED(CONFIG_RPS) +static void dump_cpumask(void *buffer, size_t *lenp, loff_t *ppos, + struct cpumask *mask) +{ + char kbuf[128]; + int len; + + if (*ppos || !*lenp) { + *lenp = 0; + return; + } + + len = min(sizeof(kbuf) - 1, *lenp); + len = scnprintf(kbuf, len, "%*pb", cpumask_pr_args(mask)); + if (!len) { + *lenp = 0; + return; + } + + if (len < *lenp) + kbuf[len++] = '\n'; + memcpy(buffer, kbuf, len); + *lenp = len; + *ppos += len; +} +#endif + #ifdef CONFIG_RPS +struct cpumask rps_default_mask; + +static int rps_default_mask_sysctl(struct ctl_table *table, int write, + void *buffer, size_t *lenp, loff_t *ppos) +{ + int err = 0; + + rtnl_lock(); + if (write) { + err = cpumask_parse(buffer, &rps_default_mask); + if (err) + goto done; + + err = rps_cpumask_housekeeping(&rps_default_mask); + if (err) + goto done; + } else { + dump_cpumask(buffer, lenp, ppos, &rps_default_mask); + } + +done: + rtnl_unlock(); + return err; +} + static int rps_sock_flow_sysctl(struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { @@ -155,13 +208,6 @@ static int flow_limit_cpu_sysctl(struct ctl_table *table, int write, write_unlock: mutex_unlock(&flow_limit_update_mutex); } else { - char kbuf[128]; - - if (*ppos || !*lenp) { - *lenp = 0; - goto done; - } - cpumask_clear(mask); rcu_read_lock(); for_each_possible_cpu(i) { @@ -171,17 +217,7 @@ write_unlock: } rcu_read_unlock(); - len = min(sizeof(kbuf) - 1, *lenp); - len = scnprintf(kbuf, len, "%*pb", cpumask_pr_args(mask)); - if (!len) { - *lenp = 0; - goto done; - } - if (len < *lenp) - kbuf[len++] = '\n'; - memcpy(buffer, kbuf, len); - *lenp = len; - *ppos += len; + dump_cpumask(buffer, lenp, ppos, mask); } done: @@ -472,6 +508,11 @@ static struct ctl_table net_core_table[] = { 
.mode = 0644, .proc_handler = rps_sock_flow_sysctl }, + { + .procname = "rps_default_mask", + .mode = 0644, + .proc_handler = rps_default_mask_sysctl + }, #endif #ifdef CONFIG_NET_FLOW_LIMIT { @@ -675,6 +716,10 @@ static __net_initdata struct pernet_operations sysctl_core_ops = { static __init int sysctl_core_init(void) { +#if IS_ENABLED(CONFIG_RPS) + cpumask_copy(&rps_default_mask, cpu_none_mask); +#endif + register_net_sysctl(&init_net, "net/core", net_core_table); return register_pernet_subsys(&sysctl_core_ops); } diff --git a/net/core/xdp.c b/net/core/xdp.c index a5a7ecf6391c..8c92fc553317 100644 --- a/net/core/xdp.c +++ b/net/core/xdp.c @@ -4,6 +4,7 @@ * Copyright (c) 2017 Jesper Dangaard Brouer, Red Hat Inc. */ #include <linux/bpf.h> +#include <linux/btf.h> #include <linux/btf_ids.h> #include <linux/filter.h> #include <linux/types.h> @@ -603,8 +604,7 @@ EXPORT_SYMBOL_GPL(xdp_warn); int xdp_alloc_skb_bulk(void **skbs, int n_skb, gfp_t gfp) { - n_skb = kmem_cache_alloc_bulk(skbuff_head_cache, gfp, - n_skb, skbs); + n_skb = kmem_cache_alloc_bulk(skbuff_cache, gfp, n_skb, skbs); if (unlikely(!n_skb)) return -ENOMEM; @@ -673,7 +673,7 @@ struct sk_buff *xdp_build_skb_from_frame(struct xdp_frame *xdpf, { struct sk_buff *skb; - skb = kmem_cache_alloc(skbuff_head_cache, GFP_ATOMIC); + skb = kmem_cache_alloc(skbuff_cache, GFP_ATOMIC); if (unlikely(!skb)) return NULL; @@ -722,7 +722,7 @@ __diag_ignore_all("-Wmissing-prototypes", * * Returns 0 on success or ``-errno`` on error. */ -int bpf_xdp_metadata_rx_timestamp(const struct xdp_md *ctx, u64 *timestamp) +__bpf_kfunc int bpf_xdp_metadata_rx_timestamp(const struct xdp_md *ctx, u64 *timestamp) { return -EOPNOTSUPP; } @@ -734,7 +734,7 @@ int bpf_xdp_metadata_rx_timestamp(const struct xdp_md *ctx, u64 *timestamp) * * Returns 0 on success or ``-errno`` on error. 
*/ -int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, u32 *hash) +__bpf_kfunc int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, u32 *hash) { return -EOPNOTSUPP; } @@ -773,3 +773,21 @@ static int __init xdp_metadata_init(void) return register_btf_kfunc_id_set(BPF_PROG_TYPE_XDP, &xdp_metadata_kfunc_set); } late_initcall(xdp_metadata_init); + +void xdp_features_set_redirect_target(struct net_device *dev, bool support_sg) +{ + dev->xdp_features |= NETDEV_XDP_ACT_NDO_XMIT; + if (support_sg) + dev->xdp_features |= NETDEV_XDP_ACT_NDO_XMIT_SG; + + call_netdevice_notifiers(NETDEV_XDP_FEAT_CHANGE, dev); +} +EXPORT_SYMBOL_GPL(xdp_features_set_redirect_target); + +void xdp_features_clear_redirect_target(struct net_device *dev) +{ + dev->xdp_features &= ~(NETDEV_XDP_ACT_NDO_XMIT | + NETDEV_XDP_ACT_NDO_XMIT_SG); + call_netdevice_notifiers(NETDEV_XDP_FEAT_CHANGE, dev); +} +EXPORT_SYMBOL_GPL(xdp_features_clear_redirect_target); diff --git a/net/devlink/core.c b/net/devlink/core.c index aeffd1b8206d..a4f47dafb864 100644 --- a/net/devlink/core.c +++ b/net/devlink/core.c @@ -205,7 +205,7 @@ struct devlink *devlink_alloc_ns(const struct devlink_ops *ops, goto err_xa_alloc; devlink->netdevice_nb.notifier_call = devlink_port_netdevice_event; - ret = register_netdevice_notifier_net(net, &devlink->netdevice_nb); + ret = register_netdevice_notifier(&devlink->netdevice_nb); if (ret) goto err_register_netdevice_notifier; @@ -266,8 +266,7 @@ void devlink_free(struct devlink *devlink) xa_destroy(&devlink->snapshot_ids); xa_destroy(&devlink->ports); - WARN_ON_ONCE(unregister_netdevice_notifier_net(devlink_net(devlink), - &devlink->netdevice_nb)); + WARN_ON_ONCE(unregister_netdevice_notifier(&devlink->netdevice_nb)); xa_erase(&devlinks, devlink->index); diff --git a/net/devlink/leftover.c b/net/devlink/leftover.c index 9d6373603340..f05ab093d231 100644 --- a/net/devlink/leftover.c +++ b/net/devlink/leftover.c @@ -8430,6 +8430,8 @@ int devlink_port_netdevice_event(struct notifier_block *nb, break; case NETDEV_REGISTER: case NETDEV_CHANGENAME: + if (devlink_net(devlink) != dev_net(netdev)) + return NOTIFY_OK; /* Set the netdev on top of previously set type. Note this * event happens also during net namespace change so here * we take into account netdev pointer appearing in this @@ -8439,6 +8441,8 @@ int devlink_port_netdevice_event(struct notifier_block *nb, netdev); break; case NETDEV_UNREGISTER: + if (devlink_net(devlink) != dev_net(netdev)) + return NOTIFY_OK; /* Clear netdev pointer, but not the type. This event happens * also during net namespace change so we need to clear * pointer to netdev that is going to another net namespace. 
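The pair of helpers introduced in net/core/xdp.c above gives drivers a single entry point to advertise or withdraw XDP redirect-target support while raising the new NETDEV_XDP_FEAT_CHANGE notifier. A minimal driver-side sketch of the intended usage follows; the mydrv_* functions and the choice of base feature flags are illustrative assumptions, not code from this series.

/* Hypothetical driver glue, assuming a kernel with this series applied.
 * Only xdp_features_set_redirect_target(), xdp_features_clear_redirect_target()
 * and the NETDEV_XDP_ACT_* flags come from the patches above.
 */
#include <linux/netdevice.h>
#include <linux/netdev.h>
#include <net/xdp.h>

static void mydrv_enable_xdp(struct net_device *dev, bool sg_capable)
{
	/* Plain capability bits can be assigned directly on the netdev. */
	dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT;

	/* Redirect-target support goes through the helper so that
	 * NETDEV_XDP_FEAT_CHANGE listeners (for example the netdev genl
	 * family added in this series) are notified; sg_capable also sets
	 * NETDEV_XDP_ACT_NDO_XMIT_SG.
	 */
	xdp_features_set_redirect_target(dev, sg_capable);
}

static void mydrv_disable_xdp(struct net_device *dev)
{
	/* Withdraws NDO_XMIT and NDO_XMIT_SG and notifies again. */
	xdp_features_clear_redirect_target(dev);
}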
diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c index 2f992a323b95..2c778b013cb0 100644 --- a/net/ipv4/af_inet.c +++ b/net/ipv4/af_inet.c @@ -347,6 +347,7 @@ lookup_protocol: sk->sk_destruct = inet_sock_destruct; sk->sk_protocol = protocol; sk->sk_backlog_rcv = sk->sk_prot->backlog_rcv; + sk->sk_txrehash = READ_ONCE(net->core.sysctl_txrehash); inet->uc_ttl = -1; inet->mc_loop = 1; diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c index 6ed7e65de494..7d206a10ad14 100644 --- a/net/ipv4/inet_connection_sock.c +++ b/net/ipv4/inet_connection_sock.c @@ -1246,9 +1246,6 @@ int inet_csk_listen_start(struct sock *sk) sk->sk_ack_backlog = 0; inet_csk_delack_init(sk); - if (sk->sk_txrehash == SOCK_TXREHASH_DEFAULT) - sk->sk_txrehash = READ_ONCE(sock_net(sk)->core.sysctl_txrehash); - /* There is race window here: we announce ourselves listening, * but this transition is still not validated by get_port(). * It is OK, because this socket enters to hash table only diff --git a/net/ipv4/tcp_bbr.c b/net/ipv4/tcp_bbr.c index d2c470524e58..146792cd26fe 100644 --- a/net/ipv4/tcp_bbr.c +++ b/net/ipv4/tcp_bbr.c @@ -295,7 +295,7 @@ static void bbr_set_pacing_rate(struct sock *sk, u32 bw, int gain) } /* override sysctl_tcp_min_tso_segs */ -static u32 bbr_min_tso_segs(struct sock *sk) +__bpf_kfunc static u32 bbr_min_tso_segs(struct sock *sk) { return sk->sk_pacing_rate < (bbr_min_tso_rate >> 3) ? 1 : 2; } @@ -328,7 +328,7 @@ static void bbr_save_cwnd(struct sock *sk) bbr->prior_cwnd = max(bbr->prior_cwnd, tcp_snd_cwnd(tp)); } -static void bbr_cwnd_event(struct sock *sk, enum tcp_ca_event event) +__bpf_kfunc static void bbr_cwnd_event(struct sock *sk, enum tcp_ca_event event) { struct tcp_sock *tp = tcp_sk(sk); struct bbr *bbr = inet_csk_ca(sk); @@ -1023,7 +1023,7 @@ static void bbr_update_model(struct sock *sk, const struct rate_sample *rs) bbr_update_gains(sk); } -static void bbr_main(struct sock *sk, const struct rate_sample *rs) +__bpf_kfunc static void bbr_main(struct sock *sk, const struct rate_sample *rs) { struct bbr *bbr = inet_csk_ca(sk); u32 bw; @@ -1035,7 +1035,7 @@ static void bbr_main(struct sock *sk, const struct rate_sample *rs) bbr_set_cwnd(sk, rs, rs->acked_sacked, bw, bbr->cwnd_gain); } -static void bbr_init(struct sock *sk) +__bpf_kfunc static void bbr_init(struct sock *sk) { struct tcp_sock *tp = tcp_sk(sk); struct bbr *bbr = inet_csk_ca(sk); @@ -1077,7 +1077,7 @@ static void bbr_init(struct sock *sk) cmpxchg(&sk->sk_pacing_status, SK_PACING_NONE, SK_PACING_NEEDED); } -static u32 bbr_sndbuf_expand(struct sock *sk) +__bpf_kfunc static u32 bbr_sndbuf_expand(struct sock *sk) { /* Provision 3 * cwnd since BBR may slow-start even during recovery. */ return 3; @@ -1086,7 +1086,7 @@ static u32 bbr_sndbuf_expand(struct sock *sk) /* In theory BBR does not need to undo the cwnd since it does not * always reduce cwnd on losses (see bbr_main()). Keep it for now. */ -static u32 bbr_undo_cwnd(struct sock *sk) +__bpf_kfunc static u32 bbr_undo_cwnd(struct sock *sk) { struct bbr *bbr = inet_csk_ca(sk); @@ -1097,7 +1097,7 @@ static u32 bbr_undo_cwnd(struct sock *sk) } /* Entering loss recovery, so save cwnd for when we exit or undo recovery. 
*/ -static u32 bbr_ssthresh(struct sock *sk) +__bpf_kfunc static u32 bbr_ssthresh(struct sock *sk) { bbr_save_cwnd(sk); return tcp_sk(sk)->snd_ssthresh; @@ -1125,7 +1125,7 @@ static size_t bbr_get_info(struct sock *sk, u32 ext, int *attr, return 0; } -static void bbr_set_state(struct sock *sk, u8 new_state) +__bpf_kfunc static void bbr_set_state(struct sock *sk, u8 new_state) { struct bbr *bbr = inet_csk_ca(sk); diff --git a/net/ipv4/tcp_cong.c b/net/ipv4/tcp_cong.c index d3cae40749e8..db8b4b488c31 100644 --- a/net/ipv4/tcp_cong.c +++ b/net/ipv4/tcp_cong.c @@ -403,7 +403,7 @@ int tcp_set_congestion_control(struct sock *sk, const char *name, bool load, * ABC caps N to 2. Slow start exits when cwnd grows over ssthresh and * returns the leftover acks to adjust cwnd in congestion avoidance mode. */ -u32 tcp_slow_start(struct tcp_sock *tp, u32 acked) +__bpf_kfunc u32 tcp_slow_start(struct tcp_sock *tp, u32 acked) { u32 cwnd = min(tcp_snd_cwnd(tp) + acked, tp->snd_ssthresh); @@ -417,7 +417,7 @@ EXPORT_SYMBOL_GPL(tcp_slow_start); /* In theory this is tp->snd_cwnd += 1 / tp->snd_cwnd (or alternative w), * for every packet that was ACKed. */ -void tcp_cong_avoid_ai(struct tcp_sock *tp, u32 w, u32 acked) +__bpf_kfunc void tcp_cong_avoid_ai(struct tcp_sock *tp, u32 w, u32 acked) { /* If credits accumulated at a higher w, apply them gently now. */ if (tp->snd_cwnd_cnt >= w) { @@ -443,7 +443,7 @@ EXPORT_SYMBOL_GPL(tcp_cong_avoid_ai); /* This is Jacobson's slow start and congestion avoidance. * SIGCOMM '88, p. 328. */ -void tcp_reno_cong_avoid(struct sock *sk, u32 ack, u32 acked) +__bpf_kfunc void tcp_reno_cong_avoid(struct sock *sk, u32 ack, u32 acked) { struct tcp_sock *tp = tcp_sk(sk); @@ -462,7 +462,7 @@ void tcp_reno_cong_avoid(struct sock *sk, u32 ack, u32 acked) EXPORT_SYMBOL_GPL(tcp_reno_cong_avoid); /* Slow start threshold is half the congestion window (min 2) */ -u32 tcp_reno_ssthresh(struct sock *sk) +__bpf_kfunc u32 tcp_reno_ssthresh(struct sock *sk) { const struct tcp_sock *tp = tcp_sk(sk); @@ -470,7 +470,7 @@ u32 tcp_reno_ssthresh(struct sock *sk) } EXPORT_SYMBOL_GPL(tcp_reno_ssthresh); -u32 tcp_reno_undo_cwnd(struct sock *sk) +__bpf_kfunc u32 tcp_reno_undo_cwnd(struct sock *sk) { const struct tcp_sock *tp = tcp_sk(sk); diff --git a/net/ipv4/tcp_cubic.c b/net/ipv4/tcp_cubic.c index 768c10c1f649..0fd78ecb67e7 100644 --- a/net/ipv4/tcp_cubic.c +++ b/net/ipv4/tcp_cubic.c @@ -126,7 +126,7 @@ static inline void bictcp_hystart_reset(struct sock *sk) ca->sample_cnt = 0; } -static void cubictcp_init(struct sock *sk) +__bpf_kfunc static void cubictcp_init(struct sock *sk) { struct bictcp *ca = inet_csk_ca(sk); @@ -139,7 +139,7 @@ static void cubictcp_init(struct sock *sk) tcp_sk(sk)->snd_ssthresh = initial_ssthresh; } -static void cubictcp_cwnd_event(struct sock *sk, enum tcp_ca_event event) +__bpf_kfunc static void cubictcp_cwnd_event(struct sock *sk, enum tcp_ca_event event) { if (event == CA_EVENT_TX_START) { struct bictcp *ca = inet_csk_ca(sk); @@ -321,7 +321,7 @@ tcp_friendliness: ca->cnt = max(ca->cnt, 2U); } -static void cubictcp_cong_avoid(struct sock *sk, u32 ack, u32 acked) +__bpf_kfunc static void cubictcp_cong_avoid(struct sock *sk, u32 ack, u32 acked) { struct tcp_sock *tp = tcp_sk(sk); struct bictcp *ca = inet_csk_ca(sk); @@ -338,7 +338,7 @@ static void cubictcp_cong_avoid(struct sock *sk, u32 ack, u32 acked) tcp_cong_avoid_ai(tp, ca->cnt, acked); } -static u32 cubictcp_recalc_ssthresh(struct sock *sk) +__bpf_kfunc static u32 cubictcp_recalc_ssthresh(struct sock *sk) { const struct 
tcp_sock *tp = tcp_sk(sk); struct bictcp *ca = inet_csk_ca(sk); @@ -355,7 +355,7 @@ static u32 cubictcp_recalc_ssthresh(struct sock *sk) return max((tcp_snd_cwnd(tp) * beta) / BICTCP_BETA_SCALE, 2U); } -static void cubictcp_state(struct sock *sk, u8 new_state) +__bpf_kfunc static void cubictcp_state(struct sock *sk, u8 new_state) { if (new_state == TCP_CA_Loss) { bictcp_reset(inet_csk_ca(sk)); @@ -445,7 +445,7 @@ static void hystart_update(struct sock *sk, u32 delay) } } -static void cubictcp_acked(struct sock *sk, const struct ack_sample *sample) +__bpf_kfunc static void cubictcp_acked(struct sock *sk, const struct ack_sample *sample) { const struct tcp_sock *tp = tcp_sk(sk); struct bictcp *ca = inet_csk_ca(sk); diff --git a/net/ipv4/tcp_dctcp.c b/net/ipv4/tcp_dctcp.c index e0a2ca7456ff..bb23bb5b387a 100644 --- a/net/ipv4/tcp_dctcp.c +++ b/net/ipv4/tcp_dctcp.c @@ -75,7 +75,7 @@ static void dctcp_reset(const struct tcp_sock *tp, struct dctcp *ca) ca->old_delivered_ce = tp->delivered_ce; } -static void dctcp_init(struct sock *sk) +__bpf_kfunc static void dctcp_init(struct sock *sk) { const struct tcp_sock *tp = tcp_sk(sk); @@ -104,7 +104,7 @@ static void dctcp_init(struct sock *sk) INET_ECN_dontxmit(sk); } -static u32 dctcp_ssthresh(struct sock *sk) +__bpf_kfunc static u32 dctcp_ssthresh(struct sock *sk) { struct dctcp *ca = inet_csk_ca(sk); struct tcp_sock *tp = tcp_sk(sk); @@ -113,7 +113,7 @@ static u32 dctcp_ssthresh(struct sock *sk) return max(tcp_snd_cwnd(tp) - ((tcp_snd_cwnd(tp) * ca->dctcp_alpha) >> 11U), 2U); } -static void dctcp_update_alpha(struct sock *sk, u32 flags) +__bpf_kfunc static void dctcp_update_alpha(struct sock *sk, u32 flags) { const struct tcp_sock *tp = tcp_sk(sk); struct dctcp *ca = inet_csk_ca(sk); @@ -169,7 +169,7 @@ static void dctcp_react_to_loss(struct sock *sk) tp->snd_ssthresh = max(tcp_snd_cwnd(tp) >> 1U, 2U); } -static void dctcp_state(struct sock *sk, u8 new_state) +__bpf_kfunc static void dctcp_state(struct sock *sk, u8 new_state) { if (new_state == TCP_CA_Recovery && new_state != inet_csk(sk)->icsk_ca_state) @@ -179,7 +179,7 @@ static void dctcp_state(struct sock *sk, u8 new_state) */ } -static void dctcp_cwnd_event(struct sock *sk, enum tcp_ca_event ev) +__bpf_kfunc static void dctcp_cwnd_event(struct sock *sk, enum tcp_ca_event ev) { struct dctcp *ca = inet_csk_ca(sk); @@ -229,7 +229,7 @@ static size_t dctcp_get_info(struct sock *sk, u32 ext, int *attr, return 0; } -static u32 dctcp_cwnd_undo(struct sock *sk) +__bpf_kfunc static u32 dctcp_cwnd_undo(struct sock *sk) { const struct dctcp *ca = inet_csk_ca(sk); struct tcp_sock *tp = tcp_sk(sk); diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c index fee9163382c2..847934763868 100644 --- a/net/ipv6/af_inet6.c +++ b/net/ipv6/af_inet6.c @@ -222,6 +222,7 @@ lookup_protocol: np->pmtudisc = IPV6_PMTUDISC_WANT; np->repflow = net->ipv6.sysctl.flowlabel_reflect & FLOWLABEL_REFLECT_ESTABLISHED; sk->sk_ipv6only = net->ipv6.sysctl.bindv6only; + sk->sk_txrehash = READ_ONCE(net->core.sysctl_txrehash); /* Init the ipv4 part of the socket since we can have sockets * using v6 API for ipv4. 
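The __bpf_kfunc annotations sprinkled over tcp_bbr.c, tcp_cong.c, tcp_cubic.c and tcp_dctcp.c above mark the congestion-control callbacks and the reno helpers as callable from BPF, which is what lets a struct_ops congestion control reuse them. A minimal sketch of such a consumer, modeled on the kernel's bpf_tcp_ca selftests (assumes a vmlinux.h generated from a kernel that registers these kfuncs):

// SPDX-License-Identifier: GPL-2.0
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* kernel helpers exposed as kfuncs by the annotations above */
extern void tcp_reno_cong_avoid(struct sock *sk, __u32 ack, __u32 acked) __ksym;
extern __u32 tcp_reno_ssthresh(struct sock *sk) __ksym;
extern __u32 tcp_reno_undo_cwnd(struct sock *sk) __ksym;

SEC("struct_ops/ca_cong_avoid")
void BPF_PROG(ca_cong_avoid, struct sock *sk, __u32 ack, __u32 acked)
{
    tcp_reno_cong_avoid(sk, ack, acked); /* delegate to the kernel's reno logic */
}

SEC("struct_ops/ca_ssthresh")
__u32 BPF_PROG(ca_ssthresh, struct sock *sk)
{
    return tcp_reno_ssthresh(sk);
}

SEC("struct_ops/ca_undo_cwnd")
__u32 BPF_PROG(ca_undo_cwnd, struct sock *sk)
{
    return tcp_reno_undo_cwnd(sk);
}

SEC(".struct_ops")
struct tcp_congestion_ops ca_reno = {
    .cong_avoid = (void *)ca_cong_avoid,
    .ssthresh   = (void *)ca_ssthresh,
    .undo_cwnd  = (void *)ca_undo_cwnd,
    .name       = "bpf_ca_reno",
};

char _license[] SEC("license") = "GPL";
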
diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c index db07cc5b4fcb..56628b52d100 100644 --- a/net/mptcp/pm_netlink.c +++ b/net/mptcp/pm_netlink.c @@ -1002,8 +1002,8 @@ static int mptcp_pm_nl_create_listen_socket(struct sock *sk, { int addrlen = sizeof(struct sockaddr_in); struct sockaddr_storage addr; - struct mptcp_sock *msk; struct socket *ssock; + struct sock *newsk; int backlog = 1024; int err; @@ -1012,11 +1012,13 @@ static int mptcp_pm_nl_create_listen_socket(struct sock *sk, if (err) return err; - msk = mptcp_sk(entry->lsk->sk); - if (!msk) + newsk = entry->lsk->sk; + if (!newsk) return -EINVAL; - ssock = __mptcp_nmpc_socket(msk); + lock_sock(newsk); + ssock = __mptcp_nmpc_socket(mptcp_sk(newsk)); + release_sock(newsk); if (!ssock) return -EINVAL; diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c index 1ff2efbab29e..c9817aa0f413 100644 --- a/net/mptcp/protocol.c +++ b/net/mptcp/protocol.c @@ -2902,6 +2902,7 @@ bool __mptcp_close(struct sock *sk, long timeout) struct mptcp_subflow_context *subflow; struct mptcp_sock *msk = mptcp_sk(sk); bool do_cancel_work = false; + int subflows_alive = 0; sk->sk_shutdown = SHUTDOWN_MASK; @@ -2928,6 +2929,8 @@ cleanup: struct sock *ssk = mptcp_subflow_tcp_sock(subflow); bool slow = lock_sock_fast_nested(ssk); + subflows_alive += ssk->sk_state != TCP_CLOSE; + /* since the close timeout takes precedence on the fail one, * cancel the latter */ @@ -2943,6 +2946,12 @@ cleanup: } sock_orphan(sk); + /* all the subflows are closed, only timeout can change the msk + * state, let's not keep resources busy for no reasons + */ + if (subflows_alive == 0) + inet_sk_state_store(sk, TCP_CLOSE); + sock_hold(sk); pr_debug("msk=%p state=%d", sk, sk->sk_state); if (msk->token) diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c index 9986681aaf40..8a9656248b0f 100644 --- a/net/mptcp/sockopt.c +++ b/net/mptcp/sockopt.c @@ -760,14 +760,21 @@ static int mptcp_setsockopt_v4(struct mptcp_sock *msk, int optname, static int mptcp_setsockopt_first_sf_only(struct mptcp_sock *msk, int level, int optname, sockptr_t optval, unsigned int optlen) { + struct sock *sk = (struct sock *)msk; struct socket *sock; + int ret = -EINVAL; /* Limit to first subflow, before the connection establishment */ + lock_sock(sk); sock = __mptcp_nmpc_socket(msk); if (!sock) - return -EINVAL; + goto unlock; - return tcp_setsockopt(sock->sk, level, optname, optval, optlen); + ret = tcp_setsockopt(sock->sk, level, optname, optval, optlen); + +unlock: + release_sock(sk); + return ret; } static int mptcp_setsockopt_sol_tcp(struct mptcp_sock *msk, int optname, diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c index beaec843f5ca..4ae1a7304cf0 100644 --- a/net/mptcp/subflow.c +++ b/net/mptcp/subflow.c @@ -1400,6 +1400,7 @@ void __mptcp_error_report(struct sock *sk) mptcp_for_each_subflow(msk, subflow) { struct sock *ssk = mptcp_subflow_tcp_sock(subflow); int err = sock_error(ssk); + int ssk_state; if (!err) continue; @@ -1410,7 +1411,14 @@ void __mptcp_error_report(struct sock *sk) if (sk->sk_state != TCP_SYN_SENT && !__mptcp_check_fallback(msk)) continue; - inet_sk_state_store(sk, inet_sk_state_load(ssk)); + /* We need to propagate only transition to CLOSE state. + * Orphaned socket will see such state change via + * subflow_sched_work_if_closed() and that path will properly + * destroy the msk as needed. 
+ */ + ssk_state = inet_sk_state_load(ssk); + if (ssk_state == TCP_CLOSE && !sock_flag(sk, SOCK_DEAD)) + inet_sk_state_store(sk, ssk_state); sk->sk_err = -err; /* This barrier is coupled with smp_rmb() in mptcp_poll() */ @@ -1682,7 +1690,7 @@ int mptcp_subflow_create_socket(struct sock *sk, unsigned short family, if (err) return err; - lock_sock(sf->sk); + lock_sock_nested(sf->sk, SINGLE_DEPTH_NESTING); /* the newly created socket has to be in the same cgroup as its parent */ mptcp_attach_cgroup(sk, sf->sk); diff --git a/net/netfilter/Kconfig b/net/netfilter/Kconfig index f71b41c7ce2f..4d6737160857 100644 --- a/net/netfilter/Kconfig +++ b/net/netfilter/Kconfig @@ -189,6 +189,9 @@ config NF_CONNTRACK_LABELS to connection tracking entries. It can be used with xtables connlabel match and the nftables ct expression. +config NF_CONNTRACK_OVS + bool + config NF_CT_PROTO_DCCP bool 'DCCP protocol connection tracking support' depends on NETFILTER_ADVANCED diff --git a/net/netfilter/Makefile b/net/netfilter/Makefile index ba2a6b5e93d9..5ffef1cd6143 100644 --- a/net/netfilter/Makefile +++ b/net/netfilter/Makefile @@ -11,6 +11,7 @@ nf_conntrack-$(CONFIG_NF_CONNTRACK_TIMEOUT) += nf_conntrack_timeout.o nf_conntrack-$(CONFIG_NF_CONNTRACK_TIMESTAMP) += nf_conntrack_timestamp.o nf_conntrack-$(CONFIG_NF_CONNTRACK_EVENTS) += nf_conntrack_ecache.o nf_conntrack-$(CONFIG_NF_CONNTRACK_LABELS) += nf_conntrack_labels.o +nf_conntrack-$(CONFIG_NF_CONNTRACK_OVS) += nf_conntrack_ovs.o nf_conntrack-$(CONFIG_NF_CT_PROTO_DCCP) += nf_conntrack_proto_dccp.o nf_conntrack-$(CONFIG_NF_CT_PROTO_SCTP) += nf_conntrack_proto_sctp.o nf_conntrack-$(CONFIG_NF_CT_PROTO_GRE) += nf_conntrack_proto_gre.o diff --git a/net/netfilter/nf_conntrack_bpf.c b/net/netfilter/nf_conntrack_bpf.c index 24002bc61e07..34913521c385 100644 --- a/net/netfilter/nf_conntrack_bpf.c +++ b/net/netfilter/nf_conntrack_bpf.c @@ -249,7 +249,7 @@ __diag_ignore_all("-Wmissing-prototypes", * @opts__sz - Length of the bpf_ct_opts structure * Must be NF_BPF_CT_OPTS_SZ (12) */ -struct nf_conn___init * +__bpf_kfunc struct nf_conn___init * bpf_xdp_ct_alloc(struct xdp_md *xdp_ctx, struct bpf_sock_tuple *bpf_tuple, u32 tuple__sz, struct bpf_ct_opts *opts, u32 opts__sz) { @@ -283,7 +283,7 @@ bpf_xdp_ct_alloc(struct xdp_md *xdp_ctx, struct bpf_sock_tuple *bpf_tuple, * @opts__sz - Length of the bpf_ct_opts structure * Must be NF_BPF_CT_OPTS_SZ (12) */ -struct nf_conn * +__bpf_kfunc struct nf_conn * bpf_xdp_ct_lookup(struct xdp_md *xdp_ctx, struct bpf_sock_tuple *bpf_tuple, u32 tuple__sz, struct bpf_ct_opts *opts, u32 opts__sz) { @@ -316,7 +316,7 @@ bpf_xdp_ct_lookup(struct xdp_md *xdp_ctx, struct bpf_sock_tuple *bpf_tuple, * @opts__sz - Length of the bpf_ct_opts structure * Must be NF_BPF_CT_OPTS_SZ (12) */ -struct nf_conn___init * +__bpf_kfunc struct nf_conn___init * bpf_skb_ct_alloc(struct __sk_buff *skb_ctx, struct bpf_sock_tuple *bpf_tuple, u32 tuple__sz, struct bpf_ct_opts *opts, u32 opts__sz) { @@ -351,7 +351,7 @@ bpf_skb_ct_alloc(struct __sk_buff *skb_ctx, struct bpf_sock_tuple *bpf_tuple, * @opts__sz - Length of the bpf_ct_opts structure * Must be NF_BPF_CT_OPTS_SZ (12) */ -struct nf_conn * +__bpf_kfunc struct nf_conn * bpf_skb_ct_lookup(struct __sk_buff *skb_ctx, struct bpf_sock_tuple *bpf_tuple, u32 tuple__sz, struct bpf_ct_opts *opts, u32 opts__sz) { @@ -376,7 +376,7 @@ bpf_skb_ct_lookup(struct __sk_buff *skb_ctx, struct bpf_sock_tuple *bpf_tuple, * @nfct - Pointer to referenced nf_conn___init object, obtained * using bpf_xdp_ct_alloc or bpf_skb_ct_alloc. 
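The __bpf_kfunc conversions in nf_conntrack_bpf.c are what allow XDP and tc programs to allocate, look up and mutate conntrack entries. A hedged sketch of the allocate-then-insert flow, following the kernel's test_bpf_nf selftest (the bpf_ct_opts___local layout is a local assumption that must stay 12 bytes, matching NF_BPF_CT_OPTS_SZ; the tuple would normally be filled from parsed headers):

// SPDX-License-Identifier: GPL-2.0
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

struct bpf_ct_opts___local {
    __s32 netns_id;   /* -1: use the caller's netns */
    __s32 error;      /* out parameter */
    __u8 l4proto;
    __u8 dir;
    __u8 reserved[2];
};                    /* 12 bytes == NF_BPF_CT_OPTS_SZ */

extern struct nf_conn___init *
bpf_xdp_ct_alloc(struct xdp_md *xdp_ctx, struct bpf_sock_tuple *bpf_tuple,
                 __u32 tuple__sz, struct bpf_ct_opts___local *opts,
                 __u32 opts__sz) __ksym;
extern struct nf_conn *bpf_ct_insert_entry(struct nf_conn___init *nfct_i) __ksym;
extern void bpf_ct_set_timeout(struct nf_conn___init *nfct, __u32 timeout) __ksym;
extern void bpf_ct_release(struct nf_conn *nfct) __ksym;

SEC("xdp")
int ct_create(struct xdp_md *ctx)
{
    struct bpf_ct_opts___local opts = { .netns_id = -1, .l4proto = IPPROTO_TCP };
    struct bpf_sock_tuple tup = {};
    struct nf_conn___init *ct_i;
    struct nf_conn *ct;

    ct_i = bpf_xdp_ct_alloc(ctx, &tup, sizeof(tup.ipv4), &opts, sizeof(opts));
    if (!ct_i)
        return XDP_PASS;

    bpf_ct_set_timeout(ct_i, 10000);  /* msecs; only valid before insertion */
    ct = bpf_ct_insert_entry(ct_i);   /* consumes ct_i whether or not it succeeds */
    if (ct)
        bpf_ct_release(ct);           /* drop the acquired reference */
    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
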
*/ -struct nf_conn *bpf_ct_insert_entry(struct nf_conn___init *nfct_i) +__bpf_kfunc struct nf_conn *bpf_ct_insert_entry(struct nf_conn___init *nfct_i) { struct nf_conn *nfct = (struct nf_conn *)nfct_i; int err; @@ -400,7 +400,7 @@ struct nf_conn *bpf_ct_insert_entry(struct nf_conn___init *nfct_i) * @nf_conn - Pointer to referenced nf_conn object, obtained using * bpf_xdp_ct_lookup or bpf_skb_ct_lookup. */ -void bpf_ct_release(struct nf_conn *nfct) +__bpf_kfunc void bpf_ct_release(struct nf_conn *nfct) { if (!nfct) return; @@ -417,7 +417,7 @@ void bpf_ct_release(struct nf_conn *nfct) * bpf_xdp_ct_alloc or bpf_skb_ct_alloc. * @timeout - Timeout in msecs. */ -void bpf_ct_set_timeout(struct nf_conn___init *nfct, u32 timeout) +__bpf_kfunc void bpf_ct_set_timeout(struct nf_conn___init *nfct, u32 timeout) { __nf_ct_set_timeout((struct nf_conn *)nfct, msecs_to_jiffies(timeout)); } @@ -432,7 +432,7 @@ void bpf_ct_set_timeout(struct nf_conn___init *nfct, u32 timeout) * bpf_ct_insert_entry, bpf_xdp_ct_lookup, or bpf_skb_ct_lookup. * @timeout - New timeout in msecs. */ -int bpf_ct_change_timeout(struct nf_conn *nfct, u32 timeout) +__bpf_kfunc int bpf_ct_change_timeout(struct nf_conn *nfct, u32 timeout) { return __nf_ct_change_timeout(nfct, msecs_to_jiffies(timeout)); } @@ -447,7 +447,7 @@ int bpf_ct_change_timeout(struct nf_conn *nfct, u32 timeout) * bpf_xdp_ct_alloc or bpf_skb_ct_alloc. * @status - New status value. */ -int bpf_ct_set_status(const struct nf_conn___init *nfct, u32 status) +__bpf_kfunc int bpf_ct_set_status(const struct nf_conn___init *nfct, u32 status) { return nf_ct_change_status_common((struct nf_conn *)nfct, status); } @@ -462,7 +462,7 @@ int bpf_ct_set_status(const struct nf_conn___init *nfct, u32 status) * bpf_ct_insert_entry, bpf_xdp_ct_lookup or bpf_skb_ct_lookup. * @status - New status value. */ -int bpf_ct_change_status(struct nf_conn *nfct, u32 status) +__bpf_kfunc int bpf_ct_change_status(struct nf_conn *nfct, u32 status) { return nf_ct_change_status_common(nfct, status); } diff --git a/net/netfilter/nf_conntrack_helper.c b/net/netfilter/nf_conntrack_helper.c index 48ea6d0264b5..0c4db2f2ac43 100644 --- a/net/netfilter/nf_conntrack_helper.c +++ b/net/netfilter/nf_conntrack_helper.c @@ -242,104 +242,6 @@ int __nf_ct_try_assign_helper(struct nf_conn *ct, struct nf_conn *tmpl, } EXPORT_SYMBOL_GPL(__nf_ct_try_assign_helper); -/* 'skb' should already be pulled to nh_ofs. 
*/ -int nf_ct_helper(struct sk_buff *skb, struct nf_conn *ct, - enum ip_conntrack_info ctinfo, u16 proto) -{ - const struct nf_conntrack_helper *helper; - const struct nf_conn_help *help; - unsigned int protoff; - int err; - - if (ctinfo == IP_CT_RELATED_REPLY) - return NF_ACCEPT; - - help = nfct_help(ct); - if (!help) - return NF_ACCEPT; - - helper = rcu_dereference(help->helper); - if (!helper) - return NF_ACCEPT; - - if (helper->tuple.src.l3num != NFPROTO_UNSPEC && - helper->tuple.src.l3num != proto) - return NF_ACCEPT; - - switch (proto) { - case NFPROTO_IPV4: - protoff = ip_hdrlen(skb); - proto = ip_hdr(skb)->protocol; - break; - case NFPROTO_IPV6: { - u8 nexthdr = ipv6_hdr(skb)->nexthdr; - __be16 frag_off; - int ofs; - - ofs = ipv6_skip_exthdr(skb, sizeof(struct ipv6hdr), &nexthdr, - &frag_off); - if (ofs < 0 || (frag_off & htons(~0x7)) != 0) { - pr_debug("proto header not found\n"); - return NF_ACCEPT; - } - protoff = ofs; - proto = nexthdr; - break; - } - default: - WARN_ONCE(1, "helper invoked on non-IP family!"); - return NF_DROP; - } - - if (helper->tuple.dst.protonum != proto) - return NF_ACCEPT; - - err = helper->help(skb, protoff, ct, ctinfo); - if (err != NF_ACCEPT) - return err; - - /* Adjust seqs after helper. This is needed due to some helpers (e.g., - * FTP with NAT) adusting the TCP payload size when mangling IP - * addresses and/or port numbers in the text-based control connection. - */ - if (test_bit(IPS_SEQ_ADJUST_BIT, &ct->status) && - !nf_ct_seq_adjust(skb, ct, ctinfo, protoff)) - return NF_DROP; - return NF_ACCEPT; -} -EXPORT_SYMBOL_GPL(nf_ct_helper); - -int nf_ct_add_helper(struct nf_conn *ct, const char *name, u8 family, - u8 proto, bool nat, struct nf_conntrack_helper **hp) -{ - struct nf_conntrack_helper *helper; - struct nf_conn_help *help; - int ret = 0; - - helper = nf_conntrack_helper_try_module_get(name, family, proto); - if (!helper) - return -EINVAL; - - help = nf_ct_helper_ext_add(ct, GFP_KERNEL); - if (!help) { - nf_conntrack_helper_put(helper); - return -ENOMEM; - } -#if IS_ENABLED(CONFIG_NF_NAT) - if (nat) { - ret = nf_nat_helper_try_module_get(name, family, proto); - if (ret) { - nf_conntrack_helper_put(helper); - return ret; - } - } -#endif - rcu_assign_pointer(help->helper, helper); - *hp = helper; - return ret; -} -EXPORT_SYMBOL_GPL(nf_ct_add_helper); - /* appropriate ct lock protecting must be taken by caller */ static int unhelp(struct nf_conn *ct, void *me) { diff --git a/net/netfilter/nf_conntrack_ovs.c b/net/netfilter/nf_conntrack_ovs.c new file mode 100644 index 000000000000..52b776bdf526 --- /dev/null +++ b/net/netfilter/nf_conntrack_ovs.c @@ -0,0 +1,178 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Support ct functions for openvswitch and used by OVS and TC conntrack. */ + +#include <net/netfilter/nf_conntrack_helper.h> +#include <net/netfilter/nf_conntrack_seqadj.h> +#include <net/netfilter/ipv6/nf_defrag_ipv6.h> +#include <net/ipv6_frag.h> +#include <net/ip.h> + +/* 'skb' should already be pulled to nh_ofs. 
*/ +int nf_ct_helper(struct sk_buff *skb, struct nf_conn *ct, + enum ip_conntrack_info ctinfo, u16 proto) +{ + const struct nf_conntrack_helper *helper; + const struct nf_conn_help *help; + unsigned int protoff; + int err; + + if (ctinfo == IP_CT_RELATED_REPLY) + return NF_ACCEPT; + + help = nfct_help(ct); + if (!help) + return NF_ACCEPT; + + helper = rcu_dereference(help->helper); + if (!helper) + return NF_ACCEPT; + + if (helper->tuple.src.l3num != NFPROTO_UNSPEC && + helper->tuple.src.l3num != proto) + return NF_ACCEPT; + + switch (proto) { + case NFPROTO_IPV4: + protoff = ip_hdrlen(skb); + proto = ip_hdr(skb)->protocol; + break; + case NFPROTO_IPV6: { + u8 nexthdr = ipv6_hdr(skb)->nexthdr; + __be16 frag_off; + int ofs; + + ofs = ipv6_skip_exthdr(skb, sizeof(struct ipv6hdr), &nexthdr, + &frag_off); + if (ofs < 0 || (frag_off & htons(~0x7)) != 0) { + pr_debug("proto header not found\n"); + return NF_ACCEPT; + } + protoff = ofs; + proto = nexthdr; + break; + } + default: + WARN_ONCE(1, "helper invoked on non-IP family!"); + return NF_DROP; + } + + if (helper->tuple.dst.protonum != proto) + return NF_ACCEPT; + + err = helper->help(skb, protoff, ct, ctinfo); + if (err != NF_ACCEPT) + return err; + + /* Adjust seqs after helper. This is needed due to some helpers (e.g., + * FTP with NAT) adjusting the TCP payload size when mangling IP + * addresses and/or port numbers in the text-based control connection. + */ + if (test_bit(IPS_SEQ_ADJUST_BIT, &ct->status) && + !nf_ct_seq_adjust(skb, ct, ctinfo, protoff)) + return NF_DROP; + return NF_ACCEPT; +} +EXPORT_SYMBOL_GPL(nf_ct_helper); + +int nf_ct_add_helper(struct nf_conn *ct, const char *name, u8 family, + u8 proto, bool nat, struct nf_conntrack_helper **hp) +{ + struct nf_conntrack_helper *helper; + struct nf_conn_help *help; + int ret = 0; + + helper = nf_conntrack_helper_try_module_get(name, family, proto); + if (!helper) + return -EINVAL; + + help = nf_ct_helper_ext_add(ct, GFP_KERNEL); + if (!help) { + nf_conntrack_helper_put(helper); + return -ENOMEM; + } +#if IS_ENABLED(CONFIG_NF_NAT) + if (nat) { + ret = nf_nat_helper_try_module_get(name, family, proto); + if (ret) { + nf_conntrack_helper_put(helper); + return ret; + } + } +#endif + rcu_assign_pointer(help->helper, helper); + *hp = helper; + return ret; +} +EXPORT_SYMBOL_GPL(nf_ct_add_helper); + +/* Trim the skb to the length specified by the IP/IPv6 header, + * removing any trailing lower-layer padding. This prepares the skb + * for higher-layer processing that assumes skb->len excludes padding + * (such as nf_ip_checksum). The caller needs to pull the skb to the + * network header, and ensure ip_hdr/ipv6_hdr points to valid data. + */ +int nf_ct_skb_network_trim(struct sk_buff *skb, int family) +{ + unsigned int len; + + switch (family) { + case NFPROTO_IPV4: + len = skb_ip_totlen(skb); + break; + case NFPROTO_IPV6: + len = sizeof(struct ipv6hdr) + + ntohs(ipv6_hdr(skb)->payload_len); + break; + default: + len = skb->len; + } + + return pskb_trim_rcsum(skb, len); +} +EXPORT_SYMBOL_GPL(nf_ct_skb_network_trim); + +/* Returns 0 on success, -EINPROGRESS if 'skb' is stolen, or other nonzero + * value if 'skb' is freed. 
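That return-value contract is easy to get wrong, so it is worth spelling out from the caller's side. A hedged sketch of the ownership rules (kernel context assumed, not verbatim code from this series):

    /* err = nf_ct_handle_fragments(net, skb, zone, family, &proto, &mru);
     *
     *   0            - skb now holds the reassembled datagram; caller keeps it
     *   -EINPROGRESS - skb was stolen by the defrag queue; the caller must
     *                  not free it or touch it again
     *   other < 0    - skb has already been freed inside the helper
     *
     * so the caller's error path is simply:
     */
    if (err)
        return err;   /* no kfree_skb() here in either error case */
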
+ */ +int nf_ct_handle_fragments(struct net *net, struct sk_buff *skb, + u16 zone, u8 family, u8 *proto, u16 *mru) +{ + int err; + + if (family == NFPROTO_IPV4) { + enum ip_defrag_users user = IP_DEFRAG_CONNTRACK_IN + zone; + + memset(IPCB(skb), 0, sizeof(struct inet_skb_parm)); + local_bh_disable(); + err = ip_defrag(net, skb, user); + local_bh_enable(); + if (err) + return err; + + *mru = IPCB(skb)->frag_max_size; +#if IS_ENABLED(CONFIG_NF_DEFRAG_IPV6) + } else if (family == NFPROTO_IPV6) { + enum ip6_defrag_users user = IP6_DEFRAG_CONNTRACK_IN + zone; + + memset(IP6CB(skb), 0, sizeof(struct inet6_skb_parm)); + err = nf_ct_frag6_gather(net, skb, user); + if (err) { + if (err != -EINPROGRESS) + kfree_skb(skb); + return err; + } + + *proto = ipv6_hdr(skb)->nexthdr; + *mru = IP6CB(skb)->frag_max_size; +#endif + } else { + kfree_skb(skb); + return -EPFNOSUPPORT; + } + + skb_clear_hash(skb); + skb->ignore_df = 1; + + return 0; +} +EXPORT_SYMBOL_GPL(nf_ct_handle_fragments); diff --git a/net/netfilter/nf_nat_bpf.c b/net/netfilter/nf_nat_bpf.c index 0fa5a0bbb0ff..141ee7783223 100644 --- a/net/netfilter/nf_nat_bpf.c +++ b/net/netfilter/nf_nat_bpf.c @@ -30,9 +30,9 @@ __diag_ignore_all("-Wmissing-prototypes", * interpreted as select a random port. * @manip - NF_NAT_MANIP_SRC or NF_NAT_MANIP_DST */ -int bpf_ct_set_nat_info(struct nf_conn___init *nfct, - union nf_inet_addr *addr, int port, - enum nf_nat_manip_type manip) +__bpf_kfunc int bpf_ct_set_nat_info(struct nf_conn___init *nfct, + union nf_inet_addr *addr, int port, + enum nf_nat_manip_type manip) { struct nf_conn *ct = (struct nf_conn *)nfct; u16 proto = nf_ct_l3num(ct); diff --git a/net/netlink/genetlink.c b/net/netlink/genetlink.c index 600993c80050..04c4036bf406 100644 --- a/net/netlink/genetlink.c +++ b/net/netlink/genetlink.c @@ -13,7 +13,7 @@ #include <linux/errno.h> #include <linux/types.h> #include <linux/socket.h> -#include <linux/string.h> +#include <linux/string_helpers.h> #include <linux/skbuff.h> #include <linux/mutex.h> #include <linux/bitmap.h> @@ -457,7 +457,7 @@ static int genl_validate_assign_mc_groups(struct genl_family *family) if (WARN_ON(grp->name[0] == '\0')) return -EINVAL; - if (WARN_ON(memchr(grp->name, '\0', GENL_NAMSIZ) == NULL)) + if (WARN_ON(!string_is_terminated(grp->name, GENL_NAMSIZ))) return -EINVAL; } diff --git a/net/openvswitch/Kconfig b/net/openvswitch/Kconfig index 747d537a3f06..29a7081858cd 100644 --- a/net/openvswitch/Kconfig +++ b/net/openvswitch/Kconfig @@ -15,6 +15,7 @@ config OPENVSWITCH select NET_MPLS_GSO select DST_CACHE select NET_NSH + select NF_CONNTRACK_OVS if NF_CONNTRACK select NF_NAT_OVS if NF_NAT help Open vSwitch is a multilayer Ethernet switch targeted at virtualized diff --git a/net/openvswitch/conntrack.c b/net/openvswitch/conntrack.c index 2172930b1f17..331730fd3580 100644 --- a/net/openvswitch/conntrack.c +++ b/net/openvswitch/conntrack.c @@ -9,6 +9,7 @@ #include <linux/udp.h> #include <linux/sctp.h> #include <linux/static_key.h> +#include <linux/string_helpers.h> #include <net/ip.h> #include <net/genetlink.h> #include <net/netfilter/nf_conntrack_core.h> @@ -434,52 +435,21 @@ static int ovs_ct_set_labels(struct nf_conn *ct, struct sw_flow_key *key, return 0; } -/* Returns 0 on success, -EINPROGRESS if 'skb' is stolen, or other nonzero - * value if 'skb' is freed. 
- */ -static int handle_fragments(struct net *net, struct sw_flow_key *key, - u16 zone, struct sk_buff *skb) +static int ovs_ct_handle_fragments(struct net *net, struct sw_flow_key *key, + u16 zone, int family, struct sk_buff *skb) { struct ovs_skb_cb ovs_cb = *OVS_CB(skb); int err; - if (key->eth.type == htons(ETH_P_IP)) { - enum ip_defrag_users user = IP_DEFRAG_CONNTRACK_IN + zone; - - memset(IPCB(skb), 0, sizeof(struct inet_skb_parm)); - err = ip_defrag(net, skb, user); - if (err) - return err; - - ovs_cb.mru = IPCB(skb)->frag_max_size; -#if IS_ENABLED(CONFIG_NF_DEFRAG_IPV6) - } else if (key->eth.type == htons(ETH_P_IPV6)) { - enum ip6_defrag_users user = IP6_DEFRAG_CONNTRACK_IN + zone; - - memset(IP6CB(skb), 0, sizeof(struct inet6_skb_parm)); - err = nf_ct_frag6_gather(net, skb, user); - if (err) { - if (err != -EINPROGRESS) - kfree_skb(skb); - return err; - } - - key->ip.proto = ipv6_hdr(skb)->nexthdr; - ovs_cb.mru = IP6CB(skb)->frag_max_size; -#endif - } else { - kfree_skb(skb); - return -EPFNOSUPPORT; - } + err = nf_ct_handle_fragments(net, skb, zone, family, &key->ip.proto, &ovs_cb.mru); + if (err) + return err; /* The key extracted from the fragment that completed this datagram * likely didn't have an L4 header, so regenerate it. */ ovs_flow_key_update_l3l4(skb, key); - key->ip.frag = OVS_FRAG_TYPE_NONE; - skb_clear_hash(skb); - skb->ignore_df = 1; *OVS_CB(skb) = ovs_cb; return 0; @@ -1090,36 +1060,6 @@ static int ovs_ct_commit(struct net *net, struct sw_flow_key *key, return 0; } -/* Trim the skb to the length specified by the IP/IPv6 header, - * removing any trailing lower-layer padding. This prepares the skb - * for higher-layer processing that assumes skb->len excludes padding - * (such as nf_ip_checksum). The caller needs to pull the skb to the - * network header, and ensure ip_hdr/ipv6_hdr points to valid data. - */ -static int ovs_skb_network_trim(struct sk_buff *skb) -{ - unsigned int len; - int err; - - switch (skb->protocol) { - case htons(ETH_P_IP): - len = skb_ip_totlen(skb); - break; - case htons(ETH_P_IPV6): - len = sizeof(struct ipv6hdr) - + ntohs(ipv6_hdr(skb)->payload_len); - break; - default: - len = skb->len; - } - - err = pskb_trim_rcsum(skb, len); - if (err) - kfree_skb(skb); - - return err; -} - /* Returns 0 on success, -EINPROGRESS if 'skb' is stolen, or other nonzero * value if 'skb' is freed. 
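The trim helper that ovs_skb_network_trim() used to duplicate (now shared as nf_ct_skb_network_trim()) exists because link layers pad short frames: Ethernet pads every frame to a 60-byte minimum, so a 40-byte TCP/IPv4 datagram arrives with six bytes of trailing padding that skb->len includes but the IP total length does not, which would break checksum helpers such as nf_ip_checksum(). A small standalone C illustration of the arithmetic (packet sizes assumed for a minimal TCP/IPv4 segment):

#include <stdio.h>

int main(void)
{
    int eth_min_payload = 60 - 14;  /* ETH_ZLEN minus the Ethernet header */
    int ip_tot_len = 40;            /* IPv4 header + TCP header, no payload */
    int skb_len = ip_tot_len > eth_min_payload ? ip_tot_len : eth_min_payload;

    /* nf_ct_skb_network_trim() cuts skb->len back to the IP total length */
    printf("received %d bytes, trimmed to %d (%d bytes of padding)\n",
           skb_len, ip_tot_len, skb_len - ip_tot_len);
    return 0;
}
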
*/ @@ -1134,12 +1074,15 @@ int ovs_ct_execute(struct net *net, struct sk_buff *skb, nh_ofs = skb_network_offset(skb); skb_pull_rcsum(skb, nh_ofs); - err = ovs_skb_network_trim(skb); - if (err) + err = nf_ct_skb_network_trim(skb, info->family); + if (err) { + kfree_skb(skb); return err; + } if (key->ip.frag != OVS_FRAG_TYPE_NONE) { - err = handle_fragments(net, key, info->zone.id, skb); + err = ovs_ct_handle_fragments(net, key, info->zone.id, + info->family, skb); if (err) return err; } @@ -1383,7 +1326,7 @@ static int parse_ct(const struct nlattr *attr, struct ovs_conntrack_info *info, #endif case OVS_CT_ATTR_HELPER: *helper = nla_data(a); - if (!memchr(*helper, '\0', nla_len(a))) { + if (!string_is_terminated(*helper, nla_len(a))) { OVS_NLERR(log, "Invalid conntrack helper"); return -EINVAL; } @@ -1404,7 +1347,7 @@ static int parse_ct(const struct nlattr *attr, struct ovs_conntrack_info *info, #ifdef CONFIG_NF_CONNTRACK_TIMEOUT case OVS_CT_ATTR_TIMEOUT: memcpy(info->timeout, nla_data(a), nla_len(a)); - if (!memchr(info->timeout, '\0', nla_len(a))) { + if (!string_is_terminated(info->timeout, nla_len(a))) { OVS_NLERR(log, "Invalid conntrack timeout"); return -EINVAL; } diff --git a/net/rds/message.c b/net/rds/message.c index b47e4f0a1639..c19c93561227 100644 --- a/net/rds/message.c +++ b/net/rds/message.c @@ -104,9 +104,9 @@ static void rds_rm_zerocopy_callback(struct rds_sock *rs, spin_lock_irqsave(&q->lock, flags); head = &q->zcookie_head; if (!list_empty(head)) { - info = list_entry(head, struct rds_msg_zcopy_info, - rs_zcookie_next); - if (info && rds_zcookie_add(info, cookie)) { + info = list_first_entry(head, struct rds_msg_zcopy_info, + rs_zcookie_next); + if (rds_zcookie_add(info, cookie)) { spin_unlock_irqrestore(&q->lock, flags); kfree(rds_info_from_znotifier(znotif)); /* caller invokes rds_wake_sk_sleep() */ diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c index 6eaffb0d8fdc..e9f1f49d18c2 100644 --- a/net/rxrpc/call_object.c +++ b/net/rxrpc/call_object.c @@ -54,12 +54,14 @@ void rxrpc_poke_call(struct rxrpc_call *call, enum rxrpc_call_poke_trace what) spin_lock_bh(&local->lock); busy = !list_empty(&call->attend_link); trace_rxrpc_poke_call(call, busy, what); + if (!busy && !rxrpc_try_get_call(call, rxrpc_call_get_poke)) + busy = true; if (!busy) { - rxrpc_get_call(call, rxrpc_call_get_poke); list_add_tail(&call->attend_link, &local->call_attend_q); } spin_unlock_bh(&local->lock); - rxrpc_wake_up_io_thread(local); + if (!busy) + rxrpc_wake_up_io_thread(local); } } diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c index 44414e724415..95f4bc206b3d 100644 --- a/net/rxrpc/conn_event.c +++ b/net/rxrpc/conn_event.c @@ -163,7 +163,7 @@ void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn, trace_rxrpc_tx_ack(chan->call_debug_id, serial, ntohl(pkt.ack.firstPacket), ntohl(pkt.ack.serial), - pkt.ack.reason, 0); + pkt.ack.reason, 0, rxrpc_rx_window_size); break; default: diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c index 6b2022240076..5e53429c6922 100644 --- a/net/rxrpc/output.c +++ b/net/rxrpc/output.c @@ -80,7 +80,8 @@ static void rxrpc_set_keepalive(struct rxrpc_call *call) */ static size_t rxrpc_fill_out_ack(struct rxrpc_connection *conn, struct rxrpc_call *call, - struct rxrpc_txbuf *txb) + struct rxrpc_txbuf *txb, + u16 *_rwind) { struct rxrpc_ackinfo ackinfo; unsigned int qsize, sack, wrap, to; @@ -124,6 +125,7 @@ static size_t rxrpc_fill_out_ack(struct rxrpc_connection *conn, jmax = rxrpc_rx_jumbo_max; qsize = (window - 1) - 
call->rx_consumed; rsize = max_t(int, call->rx_winsize - qsize, 0); + *_rwind = rsize; ackinfo.rxMTU = htonl(rxrpc_rx_mtu); ackinfo.maxMTU = htonl(mtu); ackinfo.rwind = htonl(rsize); @@ -190,6 +192,7 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) rxrpc_serial_t serial; size_t len, n; int ret, rtt_slot = -1; + u16 rwind; if (test_bit(RXRPC_CALL_DISCONNECTED, &call->flags)) return -ECONNRESET; @@ -205,7 +208,7 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) if (txb->ack.reason == RXRPC_ACK_PING) txb->wire.flags |= RXRPC_REQUEST_ACK; - n = rxrpc_fill_out_ack(conn, call, txb); + n = rxrpc_fill_out_ack(conn, call, txb, &rwind); if (n == 0) return 0; @@ -217,7 +220,8 @@ int rxrpc_send_ack_packet(struct rxrpc_call *call, struct rxrpc_txbuf *txb) txb->wire.serial = htonl(serial); trace_rxrpc_tx_ack(call->debug_id, serial, ntohl(txb->ack.firstPacket), - ntohl(txb->ack.serial), txb->ack.reason, txb->ack.nAcks); + ntohl(txb->ack.serial), txb->ack.reason, txb->ack.nAcks, + rwind); if (txb->ack.reason == RXRPC_ACK_PING) rtt_slot = rxrpc_begin_rtt_probe(call, serial, rxrpc_rtt_tx_ping); diff --git a/net/rxrpc/recvmsg.c b/net/rxrpc/recvmsg.c index 50d263a6359d..76eb2b9cd936 100644 --- a/net/rxrpc/recvmsg.c +++ b/net/rxrpc/recvmsg.c @@ -137,7 +137,7 @@ static void rxrpc_rotate_rx_window(struct rxrpc_call *call) /* Check to see if there's an ACK that needs sending. */ acked = atomic_add_return(call->rx_consumed - old_consumed, &call->ackr_nr_consumed); - if (acked > 2 && + if (acked > 8 && !test_and_set_bit(RXRPC_CALL_RX_IS_IDLE, &call->flags)) rxrpc_poke_call(call, rxrpc_call_poke_idle); } diff --git a/net/rxrpc/skbuff.c b/net/rxrpc/skbuff.c index 944320e65ea8..3bcd6ee80396 100644 --- a/net/rxrpc/skbuff.c +++ b/net/rxrpc/skbuff.c @@ -63,7 +63,7 @@ void rxrpc_free_skb(struct sk_buff *skb, enum rxrpc_skb_trace why) if (skb) { int n = atomic_dec_return(select_skb_count(skb)); trace_rxrpc_skb(skb, refcount_read(&skb->users), n, why); - kfree_skb_reason(skb, SKB_CONSUMED); + consume_skb(skb); } } @@ -78,6 +78,6 @@ void rxrpc_purge_queue(struct sk_buff_head *list) int n = atomic_dec_return(select_skb_count(skb)); trace_rxrpc_skb(skb, refcount_read(&skb->users), n, rxrpc_skb_put_purge); - kfree_skb_reason(skb, SKB_CONSUMED); + consume_skb(skb); } } diff --git a/net/sched/Kconfig b/net/sched/Kconfig index f5acb535413d..4f7b52f5a11c 100644 --- a/net/sched/Kconfig +++ b/net/sched/Kconfig @@ -984,6 +984,7 @@ config NET_ACT_TUNNEL_KEY config NET_ACT_CT tristate "connection tracking tc action" depends on NET_CLS_ACT && NF_CONNTRACK && (!NF_NAT || NF_NAT) && NF_FLOW_TABLE + select NF_CONNTRACK_OVS select NF_NAT_OVS if NF_NAT help Say Y here to allow sending the packets to conntrack module. diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c index b126f03c1bb6..9cc0bc7c71ed 100644 --- a/net/sched/act_ct.c +++ b/net/sched/act_ct.c @@ -726,31 +726,6 @@ drop_ct: return false; } -/* Trim the skb to the length specified by the IP/IPv6 header, - * removing any trailing lower-layer padding. This prepares the skb - * for higher-layer processing that assumes skb->len excludes padding - * (such as nf_ip_checksum). The caller needs to pull the skb to the - * network header, and ensure ip_hdr/ipv6_hdr points to valid data. 
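The one-character sch_htb fix above is a textbook off-by-one: an array of ARRAY_SIZE(p->inner.clprio) entries has valid indices 0 through size - 1, so the guard must trip on prio >= size; the old "prio > size" let prio == size slip through. A standalone demonstration:

#include <stdio.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

int main(void)
{
    unsigned int clprio[8];                 /* stand-in for p->inner.clprio */
    unsigned int prio = ARRAY_SIZE(clprio); /* 8: one past the last index */

    if (!(prio > ARRAY_SIZE(clprio)))
        printf("old check misses prio=%u; clprio[%u] is out of bounds\n",
               prio, prio);
    if (prio >= ARRAY_SIZE(clprio))
        printf("new check rejects prio=%u\n", prio);
    return 0;
}
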
- */ -static int tcf_ct_skb_network_trim(struct sk_buff *skb, int family) -{ - unsigned int len; - - switch (family) { - case NFPROTO_IPV4: - len = skb_ip_totlen(skb); - break; - case NFPROTO_IPV6: - len = sizeof(struct ipv6hdr) - + ntohs(ipv6_hdr(skb)->payload_len); - break; - default: - len = skb->len; - } - - return pskb_trim_rcsum(skb, len); -} - static u8 tcf_ct_skb_nf_family(struct sk_buff *skb) { u8 family = NFPROTO_UNSPEC; @@ -810,6 +785,7 @@ static int tcf_ct_handle_fragments(struct net *net, struct sk_buff *skb, struct nf_conn *ct; int err = 0; bool frag; + u8 proto; u16 mru; /* Previously seen (loopback)? Ignore. */ @@ -825,50 +801,14 @@ static int tcf_ct_handle_fragments(struct net *net, struct sk_buff *skb, return err; skb_get(skb); - mru = tc_skb_cb(skb)->mru; - - if (family == NFPROTO_IPV4) { - enum ip_defrag_users user = IP_DEFRAG_CONNTRACK_IN + zone; - - memset(IPCB(skb), 0, sizeof(struct inet_skb_parm)); - local_bh_disable(); - err = ip_defrag(net, skb, user); - local_bh_enable(); - if (err && err != -EINPROGRESS) - return err; - - if (!err) { - *defrag = true; - mru = IPCB(skb)->frag_max_size; - } - } else { /* NFPROTO_IPV6 */ -#if IS_ENABLED(CONFIG_NF_DEFRAG_IPV6) - enum ip6_defrag_users user = IP6_DEFRAG_CONNTRACK_IN + zone; - - memset(IP6CB(skb), 0, sizeof(struct inet6_skb_parm)); - err = nf_ct_frag6_gather(net, skb, user); - if (err && err != -EINPROGRESS) - goto out_free; - - if (!err) { - *defrag = true; - mru = IP6CB(skb)->frag_max_size; - } -#else - err = -EOPNOTSUPP; - goto out_free; -#endif - } + err = nf_ct_handle_fragments(net, skb, zone, family, &proto, &mru); + if (err) + return err; - if (err != -EINPROGRESS) - tc_skb_cb(skb)->mru = mru; - skb_clear_hash(skb); - skb->ignore_df = 1; - return err; + *defrag = true; + tc_skb_cb(skb)->mru = mru; -out_free: - kfree_skb(skb); - return err; + return 0; } static void tcf_ct_params_free(struct tcf_ct_params *params) @@ -1011,7 +951,7 @@ TC_INDIRECT_SCOPE int tcf_ct_act(struct sk_buff *skb, const struct tc_action *a, if (err) goto drop; - err = tcf_ct_skb_network_trim(skb, family); + err = nf_ct_skb_network_trim(skb, family); if (err) goto drop; diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c index cc28e41fb745..92f2975b6a82 100644 --- a/net/sched/sch_htb.c +++ b/net/sched/sch_htb.c @@ -433,7 +433,7 @@ static void htb_activate_prios(struct htb_sched *q, struct htb_class *cl) while (m) { unsigned int prio = ffz(~m); - if (WARN_ON_ONCE(prio > ARRAY_SIZE(p->inner.clprio))) + if (WARN_ON_ONCE(prio >= ARRAY_SIZE(p->inner.clprio))) break; m &= ~(1 << prio); diff --git a/net/tipc/netlink_compat.c b/net/tipc/netlink_compat.c index dfea27a906f2..9b47c8409231 100644 --- a/net/tipc/netlink_compat.c +++ b/net/tipc/netlink_compat.c @@ -39,6 +39,7 @@ #include "node.h" #include "net.h" #include <net/genetlink.h> +#include <linux/string_helpers.h> #include <linux/tipc_config.h> /* The legacy API had an artificial message length limit called @@ -173,11 +174,6 @@ static struct sk_buff *tipc_get_err_tlv(char *str) return buf; } -static inline bool string_is_valid(char *s, int len) -{ - return memchr(s, '\0', len) ? 
true : false; -} - static int __tipc_nl_compat_dumpit(struct tipc_nl_compat_cmd_dump *cmd, struct tipc_nl_compat_msg *msg, struct sk_buff *arg) @@ -445,7 +441,7 @@ static int tipc_nl_compat_bearer_enable(struct tipc_nl_compat_cmd_doit *cmd, return -EINVAL; len = min_t(int, len, TIPC_MAX_BEARER_NAME); - if (!string_is_valid(b->name, len)) + if (!string_is_terminated(b->name, len)) return -EINVAL; if (nla_put_string(skb, TIPC_NLA_BEARER_NAME, b->name)) @@ -486,7 +482,7 @@ static int tipc_nl_compat_bearer_disable(struct tipc_nl_compat_cmd_doit *cmd, return -EINVAL; len = min_t(int, len, TIPC_MAX_BEARER_NAME); - if (!string_is_valid(name, len)) + if (!string_is_terminated(name, len)) return -EINVAL; if (nla_put_string(skb, TIPC_NLA_BEARER_NAME, name)) @@ -584,7 +580,7 @@ static int tipc_nl_compat_link_stat_dump(struct tipc_nl_compat_msg *msg, return -EINVAL; len = min_t(int, len, TIPC_MAX_LINK_NAME); - if (!string_is_valid(name, len)) + if (!string_is_terminated(name, len)) return -EINVAL; if (strcmp(name, nla_data(link[TIPC_NLA_LINK_NAME])) != 0) @@ -819,7 +815,7 @@ static int tipc_nl_compat_link_set(struct tipc_nl_compat_cmd_doit *cmd, return -EINVAL; len = min_t(int, len, TIPC_MAX_LINK_NAME); - if (!string_is_valid(lc->name, len)) + if (!string_is_terminated(lc->name, len)) return -EINVAL; media = tipc_media_find(lc->name); @@ -856,7 +852,7 @@ static int tipc_nl_compat_link_reset_stats(struct tipc_nl_compat_cmd_doit *cmd, return -EINVAL; len = min_t(int, len, TIPC_MAX_LINK_NAME); - if (!string_is_valid(name, len)) + if (!string_is_terminated(name, len)) return -EINVAL; if (nla_put_string(skb, TIPC_NLA_LINK_NAME, name)) diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c index ed6c71826d31..b2df1e0f8153 100644 --- a/net/xdp/xsk_buff_pool.c +++ b/net/xdp/xsk_buff_pool.c @@ -140,6 +140,10 @@ static void xp_disable_drv_zc(struct xsk_buff_pool *pool) } } +#define NETDEV_XDP_ACT_ZC (NETDEV_XDP_ACT_BASIC | \ + NETDEV_XDP_ACT_REDIRECT | \ + NETDEV_XDP_ACT_XSK_ZEROCOPY) + int xp_assign_dev(struct xsk_buff_pool *pool, struct net_device *netdev, u16 queue_id, u16 flags) { @@ -178,8 +182,7 @@ int xp_assign_dev(struct xsk_buff_pool *pool, /* For copy-mode, we are done. 
*/ return 0; - if (!netdev->netdev_ops->ndo_bpf || - !netdev->netdev_ops->ndo_xsk_wakeup) { + if ((netdev->xdp_features & NETDEV_XDP_ACT_ZC) != NETDEV_XDP_ACT_ZC) { err = -EOPNOTSUPP; goto err_unreg_pool; } diff --git a/net/xfrm/xfrm_compat.c b/net/xfrm/xfrm_compat.c index a0f62fa02e06..8cbf45a8bcdc 100644 --- a/net/xfrm/xfrm_compat.c +++ b/net/xfrm/xfrm_compat.c @@ -5,6 +5,7 @@ * Based on code and translator idea by: Florian Westphal <[email protected]> */ #include <linux/compat.h> +#include <linux/nospec.h> #include <linux/xfrm.h> #include <net/xfrm.h> @@ -302,7 +303,7 @@ static int xfrm_xlate64(struct sk_buff *dst, const struct nlmsghdr *nlh_src) nla_for_each_attr(nla, attrs, len, remaining) { int err; - switch (type) { + switch (nlh_src->nlmsg_type) { case XFRM_MSG_NEWSPDINFO: err = xfrm_nla_cpy(dst, nla, nla_len(nla)); break; @@ -437,6 +438,7 @@ static int xfrm_xlate32_attr(void *dst, const struct nlattr *nla, NL_SET_ERR_MSG(extack, "Bad attribute"); return -EOPNOTSUPP; } + type = array_index_nospec(type, XFRMA_MAX + 1); if (nla_len(nla) < compat_policy[type].len) { NL_SET_ERR_MSG(extack, "Attribute bad length"); return -EOPNOTSUPP; diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c index c06e54a10540..436d29640ac2 100644 --- a/net/xfrm/xfrm_input.c +++ b/net/xfrm/xfrm_input.c @@ -279,8 +279,7 @@ static int xfrm6_remove_tunnel_encap(struct xfrm_state *x, struct sk_buff *skb) goto out; if (x->props.flags & XFRM_STATE_DECAP_DSCP) - ipv6_copy_dscp(ipv6_get_dsfield(ipv6_hdr(skb)), - ipipv6_hdr(skb)); + ipv6_copy_dscp(XFRM_MODE_SKB_CB(skb)->tos, ipipv6_hdr(skb)); if (!(x->props.flags & XFRM_STATE_NOECN)) ipip6_ecn_decapsulate(skb); diff --git a/net/xfrm/xfrm_interface_bpf.c b/net/xfrm/xfrm_interface_bpf.c index 1ef2162cebcf..d74f3fd20f2b 100644 --- a/net/xfrm/xfrm_interface_bpf.c +++ b/net/xfrm/xfrm_interface_bpf.c @@ -39,8 +39,7 @@ __diag_ignore_all("-Wmissing-prototypes", * @to - Pointer to memory to which the metadata will be copied * Cannot be NULL */ -__used noinline -int bpf_skb_get_xfrm_info(struct __sk_buff *skb_ctx, struct bpf_xfrm_info *to) +__bpf_kfunc int bpf_skb_get_xfrm_info(struct __sk_buff *skb_ctx, struct bpf_xfrm_info *to) { struct sk_buff *skb = (struct sk_buff *)skb_ctx; struct xfrm_md_info *info; @@ -62,9 +61,7 @@ int bpf_skb_get_xfrm_info(struct __sk_buff *skb_ctx, struct bpf_xfrm_info *to) * @from - Pointer to memory from which the metadata will be copied * Cannot be NULL */ -__used noinline -int bpf_skb_set_xfrm_info(struct __sk_buff *skb_ctx, - const struct bpf_xfrm_info *from) +__bpf_kfunc int bpf_skb_set_xfrm_info(struct __sk_buff *skb_ctx, const struct bpf_xfrm_info *from) { struct sk_buff *skb = (struct sk_buff *)skb_ctx; struct metadata_dst *md_dst; diff --git a/net/xfrm/xfrm_interface_core.c b/net/xfrm/xfrm_interface_core.c index 1f99dc469027..35279c220bd7 100644 --- a/net/xfrm/xfrm_interface_core.c +++ b/net/xfrm/xfrm_interface_core.c @@ -310,6 +310,52 @@ static void xfrmi_scrub_packet(struct sk_buff *skb, bool xnet) skb->mark = 0; } +static int xfrmi_input(struct sk_buff *skb, int nexthdr, __be32 spi, + int encap_type, unsigned short family) +{ + struct sec_path *sp; + + sp = skb_sec_path(skb); + if (sp && (sp->len || sp->olen) && + !xfrm_policy_check(NULL, XFRM_POLICY_IN, skb, family)) + goto discard; + + XFRM_SPI_SKB_CB(skb)->family = family; + if (family == AF_INET) { + XFRM_SPI_SKB_CB(skb)->daddroff = offsetof(struct iphdr, daddr); + XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip4 = NULL; + } else { + XFRM_SPI_SKB_CB(skb)->daddroff = offsetof(struct 
ipv6hdr, daddr); + XFRM_TUNNEL_SKB_CB(skb)->tunnel.ip6 = NULL; + } + + return xfrm_input(skb, nexthdr, spi, encap_type); +discard: + kfree_skb(skb); + return 0; +} + +static int xfrmi4_rcv(struct sk_buff *skb) +{ + return xfrmi_input(skb, ip_hdr(skb)->protocol, 0, 0, AF_INET); +} + +static int xfrmi6_rcv(struct sk_buff *skb) +{ + return xfrmi_input(skb, skb_network_header(skb)[IP6CB(skb)->nhoff], + 0, 0, AF_INET6); +} + +static int xfrmi4_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type) +{ + return xfrmi_input(skb, nexthdr, spi, encap_type, AF_INET); +} + +static int xfrmi6_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type) +{ + return xfrmi_input(skb, nexthdr, spi, encap_type, AF_INET6); +} + static int xfrmi_rcv_cb(struct sk_buff *skb, int err) { const struct xfrm_mode *inner_mode; @@ -945,8 +991,8 @@ static struct pernet_operations xfrmi_net_ops = { }; static struct xfrm6_protocol xfrmi_esp6_protocol __read_mostly = { - .handler = xfrm6_rcv, - .input_handler = xfrm_input, + .handler = xfrmi6_rcv, + .input_handler = xfrmi6_input, .cb_handler = xfrmi_rcv_cb, .err_handler = xfrmi6_err, .priority = 10, @@ -996,8 +1042,8 @@ static struct xfrm6_tunnel xfrmi_ip6ip_handler __read_mostly = { #endif static struct xfrm4_protocol xfrmi_esp4_protocol __read_mostly = { - .handler = xfrm4_rcv, - .input_handler = xfrm_input, + .handler = xfrmi4_rcv, + .input_handler = xfrmi4_input, .cb_handler = xfrmi_rcv_cb, .err_handler = xfrmi4_err, .priority = 10, diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c index e9eb82c5457d..5c61ec04b839 100644 --- a/net/xfrm/xfrm_policy.c +++ b/net/xfrm/xfrm_policy.c @@ -336,7 +336,7 @@ static void xfrm_policy_timer(struct timer_list *t) } if (xp->lft.hard_use_expires_seconds) { time64_t tmo = xp->lft.hard_use_expires_seconds + - (xp->curlft.use_time ? : xp->curlft.add_time) - now; + (READ_ONCE(xp->curlft.use_time) ? : xp->curlft.add_time) - now; if (tmo <= 0) goto expired; if (tmo < next) @@ -354,7 +354,7 @@ static void xfrm_policy_timer(struct timer_list *t) } if (xp->lft.soft_use_expires_seconds) { time64_t tmo = xp->lft.soft_use_expires_seconds + - (xp->curlft.use_time ? : xp->curlft.add_time) - now; + (READ_ONCE(xp->curlft.use_time) ? : xp->curlft.add_time) - now; if (tmo <= 0) { warn = 1; tmo = XFRM_KM_TIMEOUT; @@ -3661,7 +3661,8 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb, return 1; } - pol->curlft.use_time = ktime_get_real_seconds(); + /* This lockless write can happen from different cpus. */ + WRITE_ONCE(pol->curlft.use_time, ktime_get_real_seconds()); pols[0] = pol; npols++; @@ -3676,7 +3677,9 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb, xfrm_pol_put(pols[0]); return 0; } - pols[1]->curlft.use_time = ktime_get_real_seconds(); + /* This write can happen from different cpus. 
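The curlft.use_time annotations in the xfrm_policy.c hunk follow the usual pattern for an intentional data race: __xfrm_policy_check() updates the field locklessly from any number of CPUs while the policy timer reads it, so both sides go through WRITE_ONCE()/READ_ONCE() to get single, untorn accesses and to document the race for KCSAN. A hedged sketch of the pattern itself (kernel context assumed; not the xfrm code verbatim):

/* shared field, updated locklessly on fast paths */
static time64_t shared_use_time;

static void fast_path_touch(void)
{
    /* a plain "shared_use_time = ..." could tear or be store-fused */
    WRITE_ONCE(shared_use_time, ktime_get_real_seconds());
}

static time64_t timer_remaining(time64_t limit)
{
    time64_t t = READ_ONCE(shared_use_time); /* one snapshot, reused below */

    return t ? limit + t - ktime_get_real_seconds() : 0;
}
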
*/ + WRITE_ONCE(pols[1]->curlft.use_time, + ktime_get_real_seconds()); npols++; } } @@ -3742,6 +3745,9 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb, goto reject; } + if (if_id) + secpath_reset(skb); + xfrm_pols_put(pols, npols); return 1; } diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c index 59fffa02d1cc..2ab3e09e2227 100644 --- a/net/xfrm/xfrm_state.c +++ b/net/xfrm/xfrm_state.c @@ -577,7 +577,7 @@ static enum hrtimer_restart xfrm_timer_handler(struct hrtimer *me) if (x->km.state == XFRM_STATE_EXPIRED) goto expired; if (x->lft.hard_add_expires_seconds) { - long tmo = x->lft.hard_add_expires_seconds + + time64_t tmo = x->lft.hard_add_expires_seconds + x->curlft.add_time - now; if (tmo <= 0) { if (x->xflags & XFRM_SOFT_EXPIRE) { @@ -594,8 +594,8 @@ static enum hrtimer_restart xfrm_timer_handler(struct hrtimer *me) next = tmo; } if (x->lft.hard_use_expires_seconds) { - long tmo = x->lft.hard_use_expires_seconds + - (x->curlft.use_time ? : now) - now; + time64_t tmo = x->lft.hard_use_expires_seconds + + (READ_ONCE(x->curlft.use_time) ? : now) - now; if (tmo <= 0) goto expired; if (tmo < next) @@ -604,7 +604,7 @@ static enum hrtimer_restart xfrm_timer_handler(struct hrtimer *me) if (x->km.dying) goto resched; if (x->lft.soft_add_expires_seconds) { - long tmo = x->lft.soft_add_expires_seconds + + time64_t tmo = x->lft.soft_add_expires_seconds + x->curlft.add_time - now; if (tmo <= 0) { warn = 1; @@ -616,8 +616,8 @@ static enum hrtimer_restart xfrm_timer_handler(struct hrtimer *me) } } if (x->lft.soft_use_expires_seconds) { - long tmo = x->lft.soft_use_expires_seconds + - (x->curlft.use_time ? : now) - now; + time64_t tmo = x->lft.soft_use_expires_seconds + + (READ_ONCE(x->curlft.use_time) ? : now) - now; if (tmo <= 0) warn = 1; else if (tmo < next) @@ -1906,7 +1906,7 @@ out: hrtimer_start(&x1->mtimer, ktime_set(1, 0), HRTIMER_MODE_REL_SOFT); - if (x1->curlft.use_time) + if (READ_ONCE(x1->curlft.use_time)) xfrm_state_check_expire(x1); if (x->props.smark.m || x->props.smark.v || x->if_id) { @@ -1940,8 +1940,8 @@ int xfrm_state_check_expire(struct xfrm_state *x) { xfrm_dev_state_update_curlft(x); - if (!x->curlft.use_time) - x->curlft.use_time = ktime_get_real_seconds(); + if (!READ_ONCE(x->curlft.use_time)) + WRITE_ONCE(x->curlft.use_time, ktime_get_real_seconds()); if (x->curlft.bytes >= x->lft.hard_byte_limit || x->curlft.packets >= x->lft.hard_packet_limit) { diff --git a/samples/bpf/syscall_tp_kern.c b/samples/bpf/syscall_tp_kern.c index 50231c2eff9c..e7121dd1ee37 100644 --- a/samples/bpf/syscall_tp_kern.c +++ b/samples/bpf/syscall_tp_kern.c @@ -58,6 +58,13 @@ int trace_enter_open_at(struct syscalls_enter_open_args *ctx) return 0; } +SEC("tracepoint/syscalls/sys_enter_openat2") +int trace_enter_open_at2(struct syscalls_enter_open_args *ctx) +{ + count(&enter_open_map); + return 0; +} + SEC("tracepoint/syscalls/sys_exit_open") int trace_enter_exit(struct syscalls_exit_open_args *ctx) { @@ -71,3 +78,10 @@ int trace_enter_exit_at(struct syscalls_exit_open_args *ctx) count(&exit_open_map); return 0; } + +SEC("tracepoint/syscalls/sys_exit_openat2") +int trace_enter_exit_at2(struct syscalls_exit_open_args *ctx) +{ + count(&exit_open_map); + return 0; +} diff --git a/scripts/Makefile.modinst b/scripts/Makefile.modinst index 836391e5d209..4815a8e32227 100644 --- a/scripts/Makefile.modinst +++ b/scripts/Makefile.modinst @@ -66,9 +66,13 @@ endif # Don't stop modules_install even if we can't sign external modules. 
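In the xfrm_state.c hunk above, tmo changes from long to time64_t for a reason worth making explicit: the lifetime fields are 64-bit second counts, and on 32-bit architectures a long is 32 bits, so a sufficiently large configured lifetime silently truncates. A standalone illustration (int32_t stands in for a 32-bit long; the lifetime value is an assumed extreme):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t hard_use_expires_seconds = 0x100000000ULL; /* ~136 years */
    int32_t tmo32 = (int32_t)hard_use_expires_seconds;  /* long on 32-bit */
    int64_t tmo64 = (int64_t)hard_use_expires_seconds;  /* time64_t */

    printf("as 32-bit long: %" PRId32 " (truncated)\n", tmo32);
    printf("as time64_t:    %" PRId64 "\n", tmo64);
    return 0;
}
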
# ifeq ($(CONFIG_MODULE_SIG_ALL),y) +ifeq ($(filter pkcs11:%, $(CONFIG_MODULE_SIG_KEY)),) sig-key := $(if $(wildcard $(CONFIG_MODULE_SIG_KEY)),,$(srctree)/)$(CONFIG_MODULE_SIG_KEY) +else +sig-key := $(CONFIG_MODULE_SIG_KEY) +endif quiet_cmd_sign = SIGN $@ - cmd_sign = scripts/sign-file $(CONFIG_MODULE_SIG_HASH) $(sig-key) certs/signing_key.x509 $@ \ + cmd_sign = scripts/sign-file $(CONFIG_MODULE_SIG_HASH) "$(sig-key)" certs/signing_key.x509 $@ \ $(if $(KBUILD_EXTMOD),|| true) else quiet_cmd_sign := diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c index cfc9fdc1e863..e87738dbffc1 100644 --- a/tools/bpf/bpftool/prog.c +++ b/tools/bpf/bpftool/prog.c @@ -2233,10 +2233,38 @@ static void profile_close_perf_events(struct profiler_bpf *obj) profile_perf_event_cnt = 0; } +static int profile_open_perf_event(int mid, int cpu, int map_fd) +{ + int pmu_fd; + + pmu_fd = syscall(__NR_perf_event_open, &metrics[mid].attr, + -1 /*pid*/, cpu, -1 /*group_fd*/, 0); + if (pmu_fd < 0) { + if (errno == ENODEV) { + p_info("cpu %d may be offline, skip %s profiling.", + cpu, metrics[mid].name); + profile_perf_event_cnt++; + return 0; + } + return -1; + } + + if (bpf_map_update_elem(map_fd, + &profile_perf_event_cnt, + &pmu_fd, BPF_ANY) || + ioctl(pmu_fd, PERF_EVENT_IOC_ENABLE, 0)) { + close(pmu_fd); + return -1; + } + + profile_perf_events[profile_perf_event_cnt++] = pmu_fd; + return 0; +} + static int profile_open_perf_events(struct profiler_bpf *obj) { unsigned int cpu, m; - int map_fd, pmu_fd; + int map_fd; profile_perf_events = calloc( sizeof(int), obj->rodata->num_cpu * obj->rodata->num_metric); @@ -2255,17 +2283,11 @@ static int profile_open_perf_events(struct profiler_bpf *obj) if (!metrics[m].selected) continue; for (cpu = 0; cpu < obj->rodata->num_cpu; cpu++) { - pmu_fd = syscall(__NR_perf_event_open, &metrics[m].attr, - -1/*pid*/, cpu, -1/*group_fd*/, 0); - if (pmu_fd < 0 || - bpf_map_update_elem(map_fd, &profile_perf_event_cnt, - &pmu_fd, BPF_ANY) || - ioctl(pmu_fd, PERF_EVENT_IOC_ENABLE, 0)) { + if (profile_open_perf_event(m, cpu, map_fd)) { p_err("failed to create event %s on cpu %d", metrics[m].name, cpu); return -1; } - profile_perf_events[profile_perf_event_cnt++] = pmu_fd; } } return 0; diff --git a/tools/bpf/resolve_btfids/Build b/tools/bpf/resolve_btfids/Build index ae82da03f9bf..077de3829c72 100644 --- a/tools/bpf/resolve_btfids/Build +++ b/tools/bpf/resolve_btfids/Build @@ -1,3 +1,5 @@ +hostprogs := resolve_btfids + resolve_btfids-y += main.o resolve_btfids-y += rbtree.o resolve_btfids-y += zalloc.o @@ -7,4 +9,4 @@ resolve_btfids-y += str_error_r.o $(OUTPUT)%.o: ../../lib/%.c FORCE $(call rule_mkdir) - $(call if_changed_dep,cc_o_c) + $(call if_changed_dep,host_cc_o_c) diff --git a/tools/bpf/resolve_btfids/Makefile b/tools/bpf/resolve_btfids/Makefile index daed388aa5d7..ac548a7baa73 100644 --- a/tools/bpf/resolve_btfids/Makefile +++ b/tools/bpf/resolve_btfids/Makefile @@ -17,11 +17,14 @@ else MAKEFLAGS=--no-print-directory endif -# always use the host compiler +# Overrides for the prepare step libraries. 
HOST_OVERRIDES := AR="$(HOSTAR)" CC="$(HOSTCC)" LD="$(HOSTLD)" ARCH="$(HOSTARCH)" \ - EXTRA_CFLAGS="$(HOSTCFLAGS) $(KBUILD_HOSTCFLAGS)" + CROSS_COMPILE="" EXTRA_CFLAGS="$(HOSTCFLAGS)" RM ?= rm +HOSTCC ?= gcc +HOSTLD ?= ld +HOSTAR ?= ar CROSS_COMPILE = OUTPUT ?= $(srctree)/tools/bpf/resolve_btfids/ @@ -64,7 +67,7 @@ $(BPFOBJ): $(wildcard $(LIBBPF_SRC)/*.[ch] $(LIBBPF_SRC)/Makefile) | $(LIBBPF_OU LIBELF_FLAGS := $(shell $(HOSTPKG_CONFIG) libelf --cflags 2>/dev/null) LIBELF_LIBS := $(shell $(HOSTPKG_CONFIG) libelf --libs 2>/dev/null || echo -lelf) -CFLAGS += -g \ +HOSTCFLAGS += -g \ -I$(srctree)/tools/include \ -I$(srctree)/tools/include/uapi \ -I$(LIBBPF_INCLUDE) \ @@ -73,11 +76,11 @@ CFLAGS += -g \ LIBS = $(LIBELF_LIBS) -lz -export srctree OUTPUT CFLAGS Q +export srctree OUTPUT HOSTCFLAGS Q HOSTCC HOSTLD HOSTAR include $(srctree)/tools/build/Makefile.include $(BINARY_IN): fixdep FORCE prepare | $(OUTPUT) - $(Q)$(MAKE) $(build)=resolve_btfids $(HOST_OVERRIDES) + $(Q)$(MAKE) $(build)=resolve_btfids $(BINARY): $(BPFOBJ) $(SUBCMDOBJ) $(BINARY_IN) $(call msg,LINK,$@) diff --git a/tools/bpf/runqslower/Makefile b/tools/bpf/runqslower/Makefile index 8b3d87b82b7a..47acf6936516 100644 --- a/tools/bpf/runqslower/Makefile +++ b/tools/bpf/runqslower/Makefile @@ -13,6 +13,8 @@ BPF_DESTDIR := $(BPFOBJ_OUTPUT) BPF_INCLUDE := $(BPF_DESTDIR)/include INCLUDES := -I$(OUTPUT) -I$(BPF_INCLUDE) -I$(abspath ../../include/uapi) CFLAGS := -g -Wall $(CLANG_CROSS_FLAGS) +CFLAGS += $(EXTRA_CFLAGS) +LDFLAGS += $(EXTRA_LDFLAGS) # Try to detect best kernel BTF source KERNEL_REL := $(shell uname -r) diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h index 7f024ac22edd..17afd2b35ee5 100644 --- a/tools/include/uapi/linux/bpf.h +++ b/tools/include/uapi/linux/bpf.h @@ -2801,7 +2801,7 @@ union bpf_attr { * * long bpf_perf_prog_read_value(struct bpf_perf_event_data *ctx, struct bpf_perf_event_value *buf, u32 buf_size) * Description - * For en eBPF program attached to a perf event, retrieve the + * For an eBPF program attached to a perf event, retrieve the * value of the event counter associated to *ctx* and store it in * the structure pointed by *buf* and of size *buf_size*. Enabled * and running times are also stored in the structure (see @@ -5817,8 +5817,8 @@ enum { BPF_F_ADJ_ROOM_ENCAP_L4_UDP = (1ULL << 4), BPF_F_ADJ_ROOM_NO_CSUM_RESET = (1ULL << 5), BPF_F_ADJ_ROOM_ENCAP_L2_ETH = (1ULL << 6), - BPF_F_ADJ_ROOM_DECAP_L3_IPV4 = (1ULL << 7), - BPF_F_ADJ_ROOM_DECAP_L3_IPV6 = (1ULL << 8), + BPF_F_ADJ_ROOM_DECAP_L3_IPV4 = (1ULL << 7), + BPF_F_ADJ_ROOM_DECAP_L3_IPV6 = (1ULL << 8), }; enum { diff --git a/tools/include/uapi/linux/netdev.h b/tools/include/uapi/linux/netdev.h new file mode 100644 index 000000000000..9ee459872600 --- /dev/null +++ b/tools/include/uapi/linux/netdev.h @@ -0,0 +1,59 @@ +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ +/* Do not edit directly, auto-generated from: */ +/* Documentation/netlink/specs/netdev.yaml */ +/* YNL-GEN uapi header */ + +#ifndef _UAPI_LINUX_NETDEV_H +#define _UAPI_LINUX_NETDEV_H + +#define NETDEV_FAMILY_NAME "netdev" +#define NETDEV_FAMILY_VERSION 1 + +/** + * enum netdev_xdp_act + * @NETDEV_XDP_ACT_BASIC: XDP features set supported by all drivers + * (XDP_ABORTED, XDP_DROP, XDP_PASS, XDP_TX) + * @NETDEV_XDP_ACT_REDIRECT: The netdev supports XDP_REDIRECT + * @NETDEV_XDP_ACT_NDO_XMIT: This feature informs if netdev implements + * ndo_xdp_xmit callback. 
+ * @NETDEV_XDP_ACT_XSK_ZEROCOPY: This feature informs if netdev supports AF_XDP + * in zero copy mode. + * @NETDEV_XDP_ACT_HW_OFFLOAD: This feature informs if netdev supports XDP hw + * offloading. + * @NETDEV_XDP_ACT_RX_SG: This feature informs if netdev implements non-linear + * XDP buffer support in the driver napi callback. + * @NETDEV_XDP_ACT_NDO_XMIT_SG: This feature informs if netdev implements + * non-linear XDP buffer support in ndo_xdp_xmit callback. + */ +enum netdev_xdp_act { + NETDEV_XDP_ACT_BASIC = 1, + NETDEV_XDP_ACT_REDIRECT = 2, + NETDEV_XDP_ACT_NDO_XMIT = 4, + NETDEV_XDP_ACT_XSK_ZEROCOPY = 8, + NETDEV_XDP_ACT_HW_OFFLOAD = 16, + NETDEV_XDP_ACT_RX_SG = 32, + NETDEV_XDP_ACT_NDO_XMIT_SG = 64, +}; + +enum { + NETDEV_A_DEV_IFINDEX = 1, + NETDEV_A_DEV_PAD, + NETDEV_A_DEV_XDP_FEATURES, + + __NETDEV_A_DEV_MAX, + NETDEV_A_DEV_MAX = (__NETDEV_A_DEV_MAX - 1) +}; + +enum { + NETDEV_CMD_DEV_GET = 1, + NETDEV_CMD_DEV_ADD_NTF, + NETDEV_CMD_DEV_DEL_NTF, + NETDEV_CMD_DEV_CHANGE_NTF, + + __NETDEV_CMD_MAX, + NETDEV_CMD_MAX = (__NETDEV_CMD_MAX - 1) +}; + +#define NETDEV_MCGRP_MGMT "mgmt" + +#endif /* _UAPI_LINUX_NETDEV_H */ diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h index 496e6a8ee0dc..1ac57bb7ac55 100644 --- a/tools/lib/bpf/bpf_core_read.h +++ b/tools/lib/bpf/bpf_core_read.h @@ -364,7 +364,7 @@ enum bpf_enum_value_kind { /* Non-CO-RE variant of BPF_CORE_READ_INTO() */ #define BPF_PROBE_READ_INTO(dst, src, a, ...) ({ \ - ___core_read(bpf_probe_read, bpf_probe_read, \ + ___core_read(bpf_probe_read_kernel, bpf_probe_read_kernel, \ dst, (src), a, ##__VA_ARGS__) \ }) @@ -400,7 +400,7 @@ enum bpf_enum_value_kind { /* Non-CO-RE variant of BPF_CORE_READ_STR_INTO() */ #define BPF_PROBE_READ_STR_INTO(dst, src, a, ...) ({ \ - ___core_read(bpf_probe_read_str, bpf_probe_read, \ + ___core_read(bpf_probe_read_kernel_str, bpf_probe_read_kernel, \ dst, (src), a, ##__VA_ARGS__) \ }) diff --git a/tools/lib/bpf/bpf_helpers.h b/tools/lib/bpf/bpf_helpers.h index d37c4fe2849d..5ec1871acb2f 100644 --- a/tools/lib/bpf/bpf_helpers.h +++ b/tools/lib/bpf/bpf_helpers.h @@ -109,7 +109,7 @@ * This is a variable-specific variant of more global barrier(). */ #ifndef barrier_var -#define barrier_var(var) asm volatile("" : "=r"(var) : "0"(var)) +#define barrier_var(var) asm volatile("" : "+r"(var)) #endif /* diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c index eed5cec6f510..35a698eb825d 100644 --- a/tools/lib/bpf/libbpf.c +++ b/tools/lib/bpf/libbpf.c @@ -34,7 +34,6 @@ #include <linux/limits.h> #include <linux/perf_event.h> #include <linux/ring_buffer.h> -#include <linux/version.h> #include <sys/epoll.h> #include <sys/ioctl.h> #include <sys/mman.h> @@ -870,42 +869,6 @@ bpf_object__add_programs(struct bpf_object *obj, Elf_Data *sec_data, return 0; } -__u32 get_kernel_version(void) -{ - /* On Ubuntu LINUX_VERSION_CODE doesn't correspond to info.release, - * but Ubuntu provides /proc/version_signature file, as described at - * https://ubuntu.com/kernel, with an example contents below, which we - * can use to get a proper LINUX_VERSION_CODE. - * - * Ubuntu 5.4.0-12.15-generic 5.4.8 - * - * In the above, 5.4.8 is what kernel is actually expecting, while - * uname() call will return 5.4.0 in info.release. 
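The bpf_helpers.h hunk above rewrites barrier_var() from the asm constraint pair "=r"(var) : "0"(var) to the equivalent but tidier read-write constraint "+r"(var). Its job is unchanged: an optimization barrier around a single variable, typically used so clang cannot fold away a bounds check that the BPF verifier needs to see. A minimal usage sketch:

// SPDX-License-Identifier: GPL-2.0
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>   /* provides barrier_var() */

SEC("xdp")
int bounded(struct xdp_md *ctx)
{
    __u32 len = ctx->data_end - ctx->data;

    /* opaque to the optimizer: "len" counts as read and rewritten here,
     * so the comparison below survives into the object file for the
     * verifier to track
     */
    barrier_var(len);
    return len > 1500 ? XDP_DROP : XDP_PASS;
}

char _license[] SEC("license") = "GPL";
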
- */ - const char *ubuntu_kver_file = "/proc/version_signature"; - __u32 major, minor, patch; - struct utsname info; - - if (faccessat(AT_FDCWD, ubuntu_kver_file, R_OK, AT_EACCESS) == 0) { - FILE *f; - - f = fopen(ubuntu_kver_file, "r"); - if (f) { - if (fscanf(f, "%*s %*s %d.%d.%d\n", &major, &minor, &patch) == 3) { - fclose(f); - return KERNEL_VERSION(major, minor, patch); - } - fclose(f); - } - /* something went wrong, fall back to uname() approach */ - } - - uname(&info); - if (sscanf(info.release, "%u.%u.%u", &major, &minor, &patch) != 3) - return 0; - return KERNEL_VERSION(major, minor, patch); -} - static const struct btf_member * find_member_by_offset(const struct btf_type *t, __u32 bit_offset) { @@ -11710,17 +11673,22 @@ struct perf_buffer *perf_buffer__new(int map_fd, size_t page_cnt, const size_t attr_sz = sizeof(struct perf_event_attr); struct perf_buffer_params p = {}; struct perf_event_attr attr; + __u32 sample_period; if (!OPTS_VALID(opts, perf_buffer_opts)) return libbpf_err_ptr(-EINVAL); + sample_period = OPTS_GET(opts, sample_period, 1); + if (!sample_period) + sample_period = 1; + memset(&attr, 0, attr_sz); attr.size = attr_sz; attr.config = PERF_COUNT_SW_BPF_OUTPUT; attr.type = PERF_TYPE_SOFTWARE; attr.sample_type = PERF_SAMPLE_RAW; - attr.sample_period = 1; - attr.wakeup_events = 1; + attr.sample_period = sample_period; + attr.wakeup_events = sample_period; p.attr = &attr; p.sample_cb = sample_cb; diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h index 8777ff21ea1d..2efd80f6f7b9 100644 --- a/tools/lib/bpf/libbpf.h +++ b/tools/lib/bpf/libbpf.h @@ -1048,9 +1048,10 @@ struct bpf_xdp_query_opts { __u32 hw_prog_id; /* output */ __u32 skb_prog_id; /* output */ __u8 attach_mode; /* output */ + __u64 feature_flags; /* output */ size_t :0; }; -#define bpf_xdp_query_opts__last_field attach_mode +#define bpf_xdp_query_opts__last_field feature_flags LIBBPF_API int bpf_xdp_attach(int ifindex, int prog_fd, __u32 flags, const struct bpf_xdp_attach_opts *opts); @@ -1246,8 +1247,10 @@ typedef void (*perf_buffer_lost_fn)(void *ctx, int cpu, __u64 cnt); /* common use perf buffer options */ struct perf_buffer_opts { size_t sz; + __u32 sample_period; + size_t :0; }; -#define perf_buffer_opts__last_field sz +#define perf_buffer_opts__last_field sample_period /** * @brief **perf_buffer__new()** creates BPF perfbuf manager for a specified diff --git a/tools/lib/bpf/libbpf_probes.c b/tools/lib/bpf/libbpf_probes.c index b44fcbb4b42e..4f3bc968ff8e 100644 --- a/tools/lib/bpf/libbpf_probes.c +++ b/tools/lib/bpf/libbpf_probes.c @@ -12,11 +12,94 @@ #include <linux/btf.h> #include <linux/filter.h> #include <linux/kernel.h> +#include <linux/version.h> #include "bpf.h" #include "libbpf.h" #include "libbpf_internal.h" +/* On Ubuntu LINUX_VERSION_CODE doesn't correspond to info.release, + * but Ubuntu provides /proc/version_signature file, as described at + * https://ubuntu.com/kernel, with an example contents below, which we + * can use to get a proper LINUX_VERSION_CODE. + * + * Ubuntu 5.4.0-12.15-generic 5.4.8 + * + * In the above, 5.4.8 is what kernel is actually expecting, while + * uname() call will return 5.4.0 in info.release. 
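The libbpf.h/libbpf.c hunks above add sample_period to perf_buffer_opts so a consumer can be woken every N events instead of once per sample. A hedged userspace sketch (assumes a BPF_MAP_TYPE_PERF_EVENT_ARRAY fd obtained elsewhere and a libbpf that carries this change):

#include <bpf/libbpf.h>
#include <stdio.h>

static void on_sample(void *ctx, int cpu, void *data, __u32 size)
{
    printf("cpu %d: %u bytes\n", cpu, size);
}

static int consume(int map_fd)
{
    struct perf_buffer_opts opts = {
        .sz = sizeof(opts),
        .sample_period = 64,   /* wake the consumer every 64 events */
    };
    struct perf_buffer *pb;

    pb = perf_buffer__new(map_fd, 8 /* pages per CPU */, on_sample,
                          NULL /* lost_cb */, NULL /* ctx */, &opts);
    if (!pb)
        return -1;

    while (perf_buffer__poll(pb, 100 /* ms */) >= 0)
        ;
    perf_buffer__free(pb);
    return 0;
}
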
+ */ +static __u32 get_ubuntu_kernel_version(void) +{ + const char *ubuntu_kver_file = "/proc/version_signature"; + __u32 major, minor, patch; + int ret; + FILE *f; + + if (faccessat(AT_FDCWD, ubuntu_kver_file, R_OK, AT_EACCESS) != 0) + return 0; + + f = fopen(ubuntu_kver_file, "r"); + if (!f) + return 0; + + ret = fscanf(f, "%*s %*s %u.%u.%u\n", &major, &minor, &patch); + fclose(f); + if (ret != 3) + return 0; + + return KERNEL_VERSION(major, minor, patch); +} + +/* On Debian LINUX_VERSION_CODE doesn't correspond to info.release. + * Instead, it is provided in info.version. An example content of + * Debian 10 looks like the below. + * + * utsname::release 4.19.0-22-amd64 + * utsname::version #1 SMP Debian 4.19.260-1 (2022-09-29) + * + * In the above, 4.19.260 is what kernel is actually expecting, while + * uname() call will return 4.19.0 in info.release. + */ +static __u32 get_debian_kernel_version(struct utsname *info) +{ + __u32 major, minor, patch; + char *p; + + p = strstr(info->version, "Debian "); + if (!p) { + /* This is not a Debian kernel. */ + return 0; + } + + if (sscanf(p, "Debian %u.%u.%u", &major, &minor, &patch) != 3) + return 0; + + return KERNEL_VERSION(major, minor, patch); +} + +__u32 get_kernel_version(void) +{ + __u32 major, minor, patch, version; + struct utsname info; + + /* Check if this is an Ubuntu kernel. */ + version = get_ubuntu_kernel_version(); + if (version != 0) + return version; + + uname(&info); + + /* Check if this is a Debian kernel. */ + version = get_debian_kernel_version(&info); + if (version != 0) + return version; + + if (sscanf(info.release, "%u.%u.%u", &major, &minor, &patch) != 3) + return 0; + + return KERNEL_VERSION(major, minor, patch); +} + static int probe_prog_load(enum bpf_prog_type prog_type, const struct bpf_insn *insns, size_t insns_cnt, char *log_buf, size_t log_buf_sz) diff --git a/tools/lib/bpf/netlink.c b/tools/lib/bpf/netlink.c index 35104580870c..cb082a04ffa8 100644 --- a/tools/lib/bpf/netlink.c +++ b/tools/lib/bpf/netlink.c @@ -9,6 +9,7 @@ #include <linux/if_ether.h> #include <linux/pkt_cls.h> #include <linux/rtnetlink.h> +#include <linux/netdev.h> #include <sys/socket.h> #include <errno.h> #include <time.h> @@ -39,9 +40,15 @@ struct xdp_id_md { int ifindex; __u32 flags; struct xdp_link_info info; + __u64 feature_flags; }; -static int libbpf_netlink_open(__u32 *nl_pid) +struct xdp_features_md { + int ifindex; + __u64 flags; +}; + +static int libbpf_netlink_open(__u32 *nl_pid, int proto) { struct sockaddr_nl sa; socklen_t addrlen; @@ -51,7 +58,7 @@ static int libbpf_netlink_open(__u32 *nl_pid) memset(&sa, 0, sizeof(sa)); sa.nl_family = AF_NETLINK; - sock = socket(AF_NETLINK, SOCK_RAW | SOCK_CLOEXEC, NETLINK_ROUTE); + sock = socket(AF_NETLINK, SOCK_RAW | SOCK_CLOEXEC, proto); if (sock < 0) return -errno; @@ -212,14 +219,14 @@ done: } static int libbpf_netlink_send_recv(struct libbpf_nla_req *req, - __dump_nlmsg_t parse_msg, + int proto, __dump_nlmsg_t parse_msg, libbpf_dump_nlmsg_t parse_attr, void *cookie) { __u32 nl_pid = 0; int sock, ret; - sock = libbpf_netlink_open(&nl_pid); + sock = libbpf_netlink_open(&nl_pid, proto); if (sock < 0) return sock; @@ -238,6 +245,43 @@ out: return ret; } +static int parse_genl_family_id(struct nlmsghdr *nh, libbpf_dump_nlmsg_t fn, + void *cookie) +{ + struct genlmsghdr *gnl = NLMSG_DATA(nh); + struct nlattr *na = (struct nlattr *)((void *)gnl + GENL_HDRLEN); + struct nlattr *tb[CTRL_ATTR_FAMILY_ID + 1]; + __u16 *id = cookie; + + libbpf_nla_parse(tb, CTRL_ATTR_FAMILY_ID, na, + NLMSG_PAYLOAD(nh, 
sizeof(*gnl)), NULL); + if (!tb[CTRL_ATTR_FAMILY_ID]) + return NL_CONT; + + *id = libbpf_nla_getattr_u16(tb[CTRL_ATTR_FAMILY_ID]); + return NL_DONE; +} + +static int libbpf_netlink_resolve_genl_family_id(const char *name, + __u16 len, __u16 *id) +{ + struct libbpf_nla_req req = { + .nh.nlmsg_len = NLMSG_LENGTH(GENL_HDRLEN), + .nh.nlmsg_type = GENL_ID_CTRL, + .nh.nlmsg_flags = NLM_F_REQUEST, + .gnl.cmd = CTRL_CMD_GETFAMILY, + .gnl.version = 2, + }; + int err; + + err = nlattr_add(&req, CTRL_ATTR_FAMILY_NAME, name, len); + if (err < 0) + return err; + + return libbpf_netlink_send_recv(&req, NETLINK_GENERIC, + parse_genl_family_id, NULL, id); +} + static int __bpf_set_link_xdp_fd_replace(int ifindex, int fd, int old_fd, __u32 flags) { @@ -271,7 +315,7 @@ static int __bpf_set_link_xdp_fd_replace(int ifindex, int fd, int old_fd, } nlattr_end_nested(&req, nla); - return libbpf_netlink_send_recv(&req, NULL, NULL, NULL); + return libbpf_netlink_send_recv(&req, NETLINK_ROUTE, NULL, NULL, NULL); } int bpf_xdp_attach(int ifindex, int prog_fd, __u32 flags, const struct bpf_xdp_attach_opts *opts) @@ -357,6 +401,29 @@ static int get_xdp_info(void *cookie, void *msg, struct nlattr **tb) return 0; } +static int parse_xdp_features(struct nlmsghdr *nh, libbpf_dump_nlmsg_t fn, + void *cookie) +{ + struct genlmsghdr *gnl = NLMSG_DATA(nh); + struct nlattr *na = (struct nlattr *)((void *)gnl + GENL_HDRLEN); + struct nlattr *tb[NETDEV_CMD_MAX + 1]; + struct xdp_features_md *md = cookie; + __u32 ifindex; + + libbpf_nla_parse(tb, NETDEV_CMD_MAX, na, + NLMSG_PAYLOAD(nh, sizeof(*gnl)), NULL); + + if (!tb[NETDEV_A_DEV_IFINDEX] || !tb[NETDEV_A_DEV_XDP_FEATURES]) + return NL_CONT; + + ifindex = libbpf_nla_getattr_u32(tb[NETDEV_A_DEV_IFINDEX]); + if (ifindex != md->ifindex) + return NL_CONT; + + md->flags = libbpf_nla_getattr_u64(tb[NETDEV_A_DEV_XDP_FEATURES]); + return NL_DONE; +} + int bpf_xdp_query(int ifindex, int xdp_flags, struct bpf_xdp_query_opts *opts) { struct libbpf_nla_req req = { @@ -366,6 +433,10 @@ int bpf_xdp_query(int ifindex, int xdp_flags, struct bpf_xdp_query_opts *opts) .ifinfo.ifi_family = AF_PACKET, }; struct xdp_id_md xdp_id = {}; + struct xdp_features_md md = { + .ifindex = ifindex, + }; + __u16 id; int err; if (!OPTS_VALID(opts, bpf_xdp_query_opts)) @@ -382,7 +453,7 @@ int bpf_xdp_query(int ifindex, int xdp_flags, struct bpf_xdp_query_opts *opts) xdp_id.ifindex = ifindex; xdp_id.flags = xdp_flags; - err = libbpf_netlink_send_recv(&req, __dump_link_nlmsg, + err = libbpf_netlink_send_recv(&req, NETLINK_ROUTE, __dump_link_nlmsg, get_xdp_info, &xdp_id); if (err) return libbpf_err(err); @@ -393,6 +464,31 @@ int bpf_xdp_query(int ifindex, int xdp_flags, struct bpf_xdp_query_opts *opts) OPTS_SET(opts, skb_prog_id, xdp_id.info.skb_prog_id); OPTS_SET(opts, attach_mode, xdp_id.info.attach_mode); + if (!OPTS_HAS(opts, feature_flags)) + return 0; + + err = libbpf_netlink_resolve_genl_family_id("netdev", sizeof("netdev"), &id); + if (err < 0) + return libbpf_err(err); + + memset(&req, 0, sizeof(req)); + req.nh.nlmsg_len = NLMSG_LENGTH(GENL_HDRLEN); + req.nh.nlmsg_flags = NLM_F_REQUEST; + req.nh.nlmsg_type = id; + req.gnl.cmd = NETDEV_CMD_DEV_GET; + req.gnl.version = 2; + + err = nlattr_add(&req, NETDEV_A_DEV_IFINDEX, &ifindex, sizeof(ifindex)); + if (err < 0) + return libbpf_err(err); + + err = libbpf_netlink_send_recv(&req, NETLINK_GENERIC, + parse_xdp_features, NULL, &md); + if (err) + return libbpf_err(err); + + opts->feature_flags = md.flags; + return 0; } @@ -493,7 +589,7 @@ static int 
tc_qdisc_modify(struct bpf_tc_hook *hook, int cmd, int flags) if (ret < 0) return ret; - return libbpf_netlink_send_recv(&req, NULL, NULL, NULL); + return libbpf_netlink_send_recv(&req, NETLINK_ROUTE, NULL, NULL, NULL); } static int tc_qdisc_create_excl(struct bpf_tc_hook *hook) @@ -673,7 +769,8 @@ int bpf_tc_attach(const struct bpf_tc_hook *hook, struct bpf_tc_opts *opts) info.opts = opts; - ret = libbpf_netlink_send_recv(&req, get_tc_info, NULL, &info); + ret = libbpf_netlink_send_recv(&req, NETLINK_ROUTE, get_tc_info, NULL, + &info); if (ret < 0) return libbpf_err(ret); if (!info.processed) @@ -739,7 +836,7 @@ static int __bpf_tc_detach(const struct bpf_tc_hook *hook, return ret; } - return libbpf_netlink_send_recv(&req, NULL, NULL, NULL); + return libbpf_netlink_send_recv(&req, NETLINK_ROUTE, NULL, NULL, NULL); } int bpf_tc_detach(const struct bpf_tc_hook *hook, @@ -804,7 +901,8 @@ int bpf_tc_query(const struct bpf_tc_hook *hook, struct bpf_tc_opts *opts) info.opts = opts; - ret = libbpf_netlink_send_recv(&req, get_tc_info, NULL, &info); + ret = libbpf_netlink_send_recv(&req, NETLINK_ROUTE, get_tc_info, NULL, + &info); if (ret < 0) return libbpf_err(ret); if (!info.processed) diff --git a/tools/lib/bpf/nlattr.c b/tools/lib/bpf/nlattr.c index 3900d052ed19..975e265eab3b 100644 --- a/tools/lib/bpf/nlattr.c +++ b/tools/lib/bpf/nlattr.c @@ -178,7 +178,7 @@ int libbpf_nla_dump_errormsg(struct nlmsghdr *nlh) hlen += nlmsg_len(&err->msg); attr = (struct nlattr *) ((void *) err + hlen); - alen = nlh->nlmsg_len - hlen; + alen = (void *)nlh + nlh->nlmsg_len - (void *)attr; if (libbpf_nla_parse(tb, NLMSGERR_ATTR_MAX, attr, alen, extack_policy) != 0) { diff --git a/tools/lib/bpf/nlattr.h b/tools/lib/bpf/nlattr.h index 4d15ae2ff812..d92d1c1de700 100644 --- a/tools/lib/bpf/nlattr.h +++ b/tools/lib/bpf/nlattr.h @@ -14,6 +14,7 @@ #include <errno.h> #include <linux/netlink.h> #include <linux/rtnetlink.h> +#include <linux/genetlink.h> /* avoid multiple definition of netlink features */ #define __LINUX_NETLINK_H @@ -58,6 +59,7 @@ struct libbpf_nla_req { union { struct ifinfomsg ifinfo; struct tcmsg tc; + struct genlmsghdr gnl; }; char buf[128]; }; @@ -89,11 +91,21 @@ static inline uint8_t libbpf_nla_getattr_u8(const struct nlattr *nla) return *(uint8_t *)libbpf_nla_data(nla); } +static inline uint16_t libbpf_nla_getattr_u16(const struct nlattr *nla) +{ + return *(uint16_t *)libbpf_nla_data(nla); +} + static inline uint32_t libbpf_nla_getattr_u32(const struct nlattr *nla) { return *(uint32_t *)libbpf_nla_data(nla); } +static inline uint64_t libbpf_nla_getattr_u64(const struct nlattr *nla) +{ + return *(uint64_t *)libbpf_nla_data(nla); +} + static inline const char *libbpf_nla_getattr_str(const struct nlattr *nla) { return (const char *)libbpf_nla_data(nla); diff --git a/tools/lib/bpf/usdt.bpf.h b/tools/lib/bpf/usdt.bpf.h index fdfd235e52c4..0bd4c135acc2 100644 --- a/tools/lib/bpf/usdt.bpf.h +++ b/tools/lib/bpf/usdt.bpf.h @@ -130,7 +130,10 @@ int bpf_usdt_arg(struct pt_regs *ctx, __u64 arg_num, long *res) if (!spec) return -ESRCH; - if (arg_num >= BPF_USDT_MAX_ARG_CNT || arg_num >= spec->arg_cnt) + if (arg_num >= BPF_USDT_MAX_ARG_CNT) + return -ENOENT; + barrier_var(arg_num); + if (arg_num >= spec->arg_cnt) return -ENOENT; arg_spec = &spec->args[arg_num]; diff --git a/tools/testing/selftests/bpf/.gitignore b/tools/testing/selftests/bpf/.gitignore index 4aa5bba956ff..116fecf80ca1 100644 --- a/tools/testing/selftests/bpf/.gitignore +++ b/tools/testing/selftests/bpf/.gitignore @@ -48,3 +48,4 @@ xskxceiver 
xdp_redirect_multi xdp_synproxy xdp_hw_metadata +xdp_features diff --git a/tools/testing/selftests/bpf/DENYLIST.s390x b/tools/testing/selftests/bpf/DENYLIST.s390x index 50924611e5bb..b89eb87034e4 100644 --- a/tools/testing/selftests/bpf/DENYLIST.s390x +++ b/tools/testing/selftests/bpf/DENYLIST.s390x @@ -1,93 +1,24 @@ # TEMPORARY # Alphabetical order -atomics # attach(add): actual -524 <= expected 0 (trampoline) bloom_filter_map # failed to find kernel BTF type ID of '__x64_sys_getpgid': -3 (?) bpf_cookie # failed to open_and_load program: -524 (trampoline) -bpf_iter_setsockopt # JIT does not support calling kernel function (kfunc) bpf_loop # attaches to __x64_sys_nanosleep -bpf_mod_race # BPF trampoline -bpf_nf # JIT does not support calling kernel function -bpf_tcp_ca # JIT does not support calling kernel function (kfunc) -cb_refs # expected error message unexpected error: -524 (trampoline) -cgroup_hierarchical_stats # JIT does not support calling kernel function (kfunc) -cgrp_kfunc # JIT does not support calling kernel function cgrp_local_storage # prog_attach unexpected error: -524 (trampoline) -core_read_macros # unknown func bpf_probe_read#4 (overlapping) -cpumask # JIT does not support calling kernel function -d_path # failed to auto-attach program 'prog_stat': -524 (trampoline) -decap_sanity # JIT does not support calling kernel function (kfunc) -deny_namespace # failed to attach: ERROR: strerror_r(-524)=22 (trampoline) -dummy_st_ops # test_run unexpected error: -524 (errno 524) (trampoline) -fentry_fexit # fentry attach failed: -524 (trampoline) -fentry_test # fentry_first_attach unexpected error: -524 (trampoline) -fexit_bpf2bpf # freplace_attach_trace unexpected error: -524 (trampoline) fexit_sleep # fexit_skel_load fexit skeleton failed (trampoline) -fexit_stress # fexit attach failed prog 0 failed: -524 (trampoline) -fexit_test # fexit_first_attach unexpected error: -524 (trampoline) -get_func_args_test # trampoline -get_func_ip_test # get_func_ip_test__attach unexpected error: -524 (trampoline) get_stack_raw_tp # user_stack corrupted user stack (no backchain userspace) -htab_update # failed to attach: ERROR: strerror_r(-524)=22 (trampoline) -jit_probe_mem # jit_probe_mem__open_and_load unexpected error: -524 (kfunc) -kfree_skb # attach fentry unexpected error: -524 (trampoline) -kfunc_call # 'bpf_prog_active': not found in kernel BTF (?) -kfunc_dynptr_param # JIT does not support calling kernel function (kfunc) kprobe_multi_bench_attach # bpf_program__attach_kprobe_multi_opts unexpected error: -95 kprobe_multi_test # relies on fentry ksyms_module # test_ksyms_module__open_and_load unexpected error: -9 (?) ksyms_module_libbpf # JIT does not support calling kernel function (kfunc) ksyms_module_lskel # test_ksyms_module_lskel__open_and_load unexpected error: -9 (?) -libbpf_get_fd_by_id_opts # failed to attach: ERROR: strerror_r(-524)=22 (trampoline) -linked_list # JIT does not support calling kernel function (kfunc) -lookup_key # JIT does not support calling kernel function (kfunc) -lru_bug # prog 'printk': failed to auto-attach: -524 -map_kptr # failed to open_and_load program: -524 (trampoline) -modify_return # modify_return attach failed: -524 (trampoline) module_attach # skel_attach skeleton attach failed: -524 (trampoline) -mptcp -nested_trust # JIT does not support calling kernel function -netcnt # failed to load BPF skeleton 'netcnt_prog': -7 (?) -probe_user # check_kprobe_res wrong kprobe res from probe read (?) 
-rcu_read_lock # failed to find kernel BTF type ID of '__x64_sys_getpgid': -3 (?) -recursion # skel_attach unexpected error: -524 (trampoline) ringbuf # skel_load skeleton load failed (?) -select_reuseport # intermittently fails on new s390x setup -send_signal # intermittently fails to receive signal -setget_sockopt # attach unexpected error: -524 (trampoline) -sk_assign # Can't read on server: Invalid argument (?) -sk_lookup # endianness problem -sk_storage_tracing # test_sk_storage_tracing__attach unexpected error: -524 (trampoline) -skc_to_unix_sock # could not attach BPF object unexpected error: -524 (trampoline) -socket_cookie # prog_attach unexpected error: -524 (trampoline) stacktrace_build_id # compare_map_keys stackid_hmap vs. stackmap err -2 errno 2 (?) -tailcalls # tail_calls are not allowed in non-JITed programs with bpf-to-bpf calls (?) -task_kfunc # JIT does not support calling kernel function -task_local_storage # failed to auto-attach program 'trace_exit_creds': -524 (trampoline) -test_bpffs # bpffs test failed 255 (iterator) -test_bprm_opts # failed to auto-attach program 'secure_exec': -524 (trampoline) -test_ima # failed to auto-attach program 'ima': -524 (trampoline) -test_local_storage # failed to auto-attach program 'unlink_hook': -524 (trampoline) test_lsm # attach unexpected error: -524 (trampoline) -test_overhead # attach_fentry unexpected error: -524 (trampoline) -test_profiler # unknown func bpf_probe_read_str#45 (overlapping) -timer # failed to auto-attach program 'test1': -524 (trampoline) -timer_crash # trampoline -timer_mim # failed to auto-attach program 'test1': -524 (trampoline) -trace_ext # failed to auto-attach program 'test_pkt_md_access_new': -524 (trampoline) trace_printk # trace_printk__load unexpected error: -2 (errno 2) (?) trace_vprintk # trace_vprintk__open_and_load unexpected error: -9 (?) -tracing_struct # failed to auto-attach: -524 (trampoline) -trampoline_count # prog 'prog1': failed to attach: ERROR: strerror_r(-524)=22 (trampoline) -type_cast # JIT does not support calling kernel function unpriv_bpf_disabled # fentry user_ringbuf # failed to find kernel BTF type ID of '__s390x_sys_prctl': -3 (?) verif_stats # trace_vprintk__open_and_load unexpected error: -9 (?) -verify_pkcs7_sig # JIT does not support calling kernel function (kfunc) -vmlinux # failed to auto-attach program 'handle__fentry': -524 (trampoline) -xdp_adjust_tail # case-128 err 0 errno 28 retval 1 size 128 expect-size 3520 (?) 
xdp_bonding # failed to auto-attach program 'trace_on_entry': -524 (trampoline) -xdp_bpf2bpf # failed to auto-attach program 'trace_on_entry': -524 (trampoline) -xdp_do_redirect # prog_run_max_size unexpected error: -22 (errno 22) xdp_metadata # JIT does not support calling kernel function (kfunc) -xdp_synproxy # JIT does not support calling kernel function (kfunc) -xfrm_info # JIT does not support calling kernel function (kfunc) diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile index 53eae7be8dff..c4b5c44cdee2 100644 --- a/tools/testing/selftests/bpf/Makefile +++ b/tools/testing/selftests/bpf/Makefile @@ -22,10 +22,11 @@ endif BPF_GCC ?= $(shell command -v bpf-gcc;) SAN_CFLAGS ?= +SAN_LDFLAGS ?= $(SAN_CFLAGS) CFLAGS += -g -O0 -rdynamic -Wall -Werror $(GENFLAGS) $(SAN_CFLAGS) \ -I$(CURDIR) -I$(INCLUDE_DIR) -I$(GENDIR) -I$(LIBDIR) \ -I$(TOOLSINCDIR) -I$(APIDIR) -I$(OUTPUT) -LDFLAGS += $(SAN_CFLAGS) +LDFLAGS += $(SAN_LDFLAGS) LDLIBS += -lelf -lz -lrt -lpthread # Silence some warnings when compiled with clang @@ -73,7 +74,8 @@ TEST_PROGS := test_kmod.sh \ test_bpftool.sh \ test_bpftool_metadata.sh \ test_doc_build.sh \ - test_xsk.sh + test_xsk.sh \ + test_xdp_features.sh TEST_PROGS_EXTENDED := with_addr.sh \ with_tunnels.sh ima_setup.sh verify_sig_setup.sh \ @@ -83,7 +85,8 @@ TEST_PROGS_EXTENDED := with_addr.sh \ TEST_GEN_PROGS_EXTENDED = test_sock_addr test_skb_cgroup_id_user \ flow_dissector_load test_flow_dissector test_tcp_check_syncookie_user \ test_lirc_mode2_user xdping test_cpp runqslower bench bpf_testmod.ko \ - xskxceiver xdp_redirect_multi xdp_synproxy veristat xdp_hw_metadata + xskxceiver xdp_redirect_multi xdp_synproxy veristat xdp_hw_metadata \ + xdp_features TEST_CUSTOM_PROGS = $(OUTPUT)/urandom_read $(OUTPUT)/sign-file TEST_GEN_FILES += liburandom_read.so @@ -189,7 +192,7 @@ $(OUTPUT)/liburandom_read.so: urandom_read_lib1.c urandom_read_lib2.c $(OUTPUT)/urandom_read: urandom_read.c urandom_read_aux.c $(OUTPUT)/liburandom_read.so $(call msg,BINARY,,$@) $(Q)$(CLANG) $(filter-out -static,$(CFLAGS) $(LDFLAGS)) $(filter %.c,$^) \ - liburandom_read.so $(filter-out -static,$(LDLIBS)) \ + -lurandom_read $(filter-out -static,$(LDLIBS)) -L$(OUTPUT) \ -fuse-ld=$(LLD) -Wl,-znoseparate-code -Wl,--build-id=sha1 \ -Wl,-rpath=. 
-o $@ @@ -212,7 +215,9 @@ $(OUTPUT)/runqslower: $(BPFOBJ) | $(DEFAULT_BPFTOOL) $(RUNQSLOWER_OUTPUT) OUTPUT=$(RUNQSLOWER_OUTPUT) VMLINUX_BTF=$(VMLINUX_BTF) \ BPFTOOL_OUTPUT=$(HOST_BUILD_DIR)/bpftool/ \ BPFOBJ_OUTPUT=$(BUILD_DIR)/libbpf \ - BPFOBJ=$(BPFOBJ) BPF_INCLUDE=$(INCLUDE_DIR) && \ + BPFOBJ=$(BPFOBJ) BPF_INCLUDE=$(INCLUDE_DIR) \ + EXTRA_CFLAGS='-g -O0 $(SAN_CFLAGS)' \ + EXTRA_LDFLAGS='$(SAN_LDFLAGS)' && \ cp $(RUNQSLOWER_OUTPUT)runqslower $@ TEST_GEN_PROGS_EXTENDED += $(DEFAULT_BPFTOOL) @@ -246,7 +251,7 @@ BPFTOOL ?= $(DEFAULT_BPFTOOL) $(DEFAULT_BPFTOOL): $(wildcard $(BPFTOOLDIR)/*.[ch] $(BPFTOOLDIR)/Makefile) \ $(HOST_BPFOBJ) | $(HOST_BUILD_DIR)/bpftool $(Q)$(MAKE) $(submake_extras) -C $(BPFTOOLDIR) \ - ARCH= CROSS_COMPILE= CC=$(HOSTCC) LD=$(HOSTLD) \ + ARCH= CROSS_COMPILE= CC="$(HOSTCC)" LD="$(HOSTLD)" \ EXTRA_CFLAGS='-g -O0' \ OUTPUT=$(HOST_BUILD_DIR)/bpftool/ \ LIBBPF_OUTPUT=$(HOST_BUILD_DIR)/libbpf/ \ @@ -269,7 +274,8 @@ $(BPFOBJ): $(wildcard $(BPFDIR)/*.[ch] $(BPFDIR)/Makefile) \ $(APIDIR)/linux/bpf.h \ | $(BUILD_DIR)/libbpf $(Q)$(MAKE) $(submake_extras) -C $(BPFDIR) OUTPUT=$(BUILD_DIR)/libbpf/ \ - EXTRA_CFLAGS='-g -O0' \ + EXTRA_CFLAGS='-g -O0 $(SAN_CFLAGS)' \ + EXTRA_LDFLAGS='$(SAN_LDFLAGS)' \ DESTDIR=$(SCRATCH_DIR) prefix= all install_headers ifneq ($(BPFOBJ),$(HOST_BPFOBJ)) @@ -278,7 +284,8 @@ $(HOST_BPFOBJ): $(wildcard $(BPFDIR)/*.[ch] $(BPFDIR)/Makefile) \ | $(HOST_BUILD_DIR)/libbpf $(Q)$(MAKE) $(submake_extras) -C $(BPFDIR) \ EXTRA_CFLAGS='-g -O0' ARCH= CROSS_COMPILE= \ - OUTPUT=$(HOST_BUILD_DIR)/libbpf/ CC=$(HOSTCC) LD=$(HOSTLD) \ + OUTPUT=$(HOST_BUILD_DIR)/libbpf/ \ + CC="$(HOSTCC)" LD="$(HOSTLD)" \ DESTDIR=$(HOST_SCRATCH_DIR)/ prefix= all install_headers endif @@ -299,7 +306,7 @@ $(RESOLVE_BTFIDS): $(HOST_BPFOBJ) | $(HOST_BUILD_DIR)/resolve_btfids \ $(TOOLSDIR)/lib/ctype.c \ $(TOOLSDIR)/lib/str_error_r.c $(Q)$(MAKE) $(submake_extras) -C $(TOOLSDIR)/bpf/resolve_btfids \ - CC=$(HOSTCC) LD=$(HOSTLD) AR=$(HOSTAR) \ + CC="$(HOSTCC)" LD="$(HOSTLD)" AR="$(HOSTAR)" \ LIBBPF_INCLUDE=$(HOST_INCLUDE_DIR) \ OUTPUT=$(HOST_BUILD_DIR)/resolve_btfids/ BPFOBJ=$(HOST_BPFOBJ) @@ -385,6 +392,7 @@ test_subskeleton_lib.skel.h-deps := test_subskeleton_lib2.bpf.o test_subskeleton test_usdt.skel.h-deps := test_usdt.bpf.o test_usdt_multispec.bpf.o xsk_xdp_progs.skel.h-deps := xsk_xdp_progs.bpf.o xdp_hw_metadata.skel.h-deps := xdp_hw_metadata.bpf.o +xdp_features.skel.h-deps := xdp_features.bpf.o LINKED_BPF_SRCS := $(patsubst %.bpf.o,%.c,$(foreach skel,$(LINKED_SKELS),$($(skel)-deps))) @@ -519,7 +527,8 @@ $(OUTPUT)/$(TRUNNER_BINARY): $(TRUNNER_TEST_OBJS) \ $$(call msg,BINARY,,$$@) $(Q)$$(CC) $$(CFLAGS) $$(filter %.a %.o,$$^) $$(LDLIBS) -o $$@ $(Q)$(RESOLVE_BTFIDS) --btf $(TRUNNER_OUTPUT)/btf_data.bpf.o $$@ - $(Q)ln -sf $(if $2,..,.)/tools/build/bpftool/bootstrap/bpftool $(if $2,$2/)bpftool + $(Q)ln -sf $(if $2,..,.)/tools/build/bpftool/bootstrap/bpftool \ + $(OUTPUT)/$(if $2,$2/)bpftool endef @@ -586,6 +595,10 @@ $(OUTPUT)/xdp_hw_metadata: xdp_hw_metadata.c $(OUTPUT)/network_helpers.o $(OUTPU $(call msg,BINARY,,$@) $(Q)$(CC) $(CFLAGS) $(filter %.a %.o %.c,$^) $(LDLIBS) -o $@ +$(OUTPUT)/xdp_features: xdp_features.c $(OUTPUT)/network_helpers.o $(OUTPUT)/xdp_features.skel.h | $(OUTPUT) + $(call msg,BINARY,,$@) + $(Q)$(CC) $(CFLAGS) $(filter %.a %.o %.c,$^) $(LDLIBS) -o $@ + # Make sure we are able to include and link libbpf against c++. 
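The $(OUTPUT)/xdp_features rule above follows the usual selftest pattern of linking the bpftool-generated skeleton header straight into a userspace test binary. As a rough sketch of what such an entry point looks like (this assumes the standard bpftool <name>__open_and_load()/<name>__destroy() skeleton conventions and a progs.xdp_do_pass handle; the real test's attach and netlink logic is considerably more involved)::

	#include <stdio.h>
	#include <bpf/libbpf.h>
	#include "xdp_features.skel.h"	/* generated from xdp_features.bpf.o */

	int main(void)
	{
		struct xdp_features *skel;

		/* open the embedded BPF object, then load and verify all programs */
		skel = xdp_features__open_and_load();
		if (!skel) {
			fprintf(stderr, "failed to load xdp_features skeleton\n");
			return 1;
		}

		/* a given XDP program would be attached here, e.g.
		 * bpf_xdp_attach(ifindex,
		 *		  bpf_program__fd(skel->progs.xdp_do_pass),
		 *		  XDP_FLAGS_DRV_MODE, NULL);
		 */

		xdp_features__destroy(skel);
		return 0;
	}
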
$(OUTPUT)/test_cpp: test_cpp.cpp $(OUTPUT)/test_core_extern.skel.h $(BPFOBJ) $(call msg,CXX,,$@) diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c index 5085fea3cac5..46500636d8cd 100644 --- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c +++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c @@ -59,7 +59,7 @@ bpf_testmod_test_struct_arg_5(void) { return bpf_testmod_test_struct_arg_result; } -noinline void +__bpf_kfunc void bpf_testmod_test_mod_kfunc(int i) { *(int *)this_cpu_ptr(&bpf_testmod_ksym_percpu) = i; diff --git a/tools/testing/selftests/bpf/netcnt_common.h b/tools/testing/selftests/bpf/netcnt_common.h index 0ab1c88041cd..2d4a58e4e39c 100644 --- a/tools/testing/selftests/bpf/netcnt_common.h +++ b/tools/testing/selftests/bpf/netcnt_common.h @@ -8,11 +8,11 @@ /* sizeof(struct bpf_local_storage_elem): * - * It really is about 128 bytes on x86_64, but allocate more to account for - * possible layout changes, different architectures, etc. + * It is about 128 bytes on x86_64 and 512 bytes on s390x, but allocate more to + * account for possible layout changes, different architectures, etc. * The kernel will wrap up to PAGE_SIZE internally anyway. */ -#define SIZEOF_BPF_LOCAL_STORAGE_ELEM 256 +#define SIZEOF_BPF_LOCAL_STORAGE_ELEM 768 /* Try to estimate kernel's BPF_LOCAL_STORAGE_MAX_VALUE_SIZE: */ #define BPF_LOCAL_STORAGE_MAX_VALUE_SIZE (0xFFFF - \ diff --git a/tools/testing/selftests/bpf/prog_tests/attach_probe.c b/tools/testing/selftests/bpf/prog_tests/attach_probe.c index 9566d9d2f6ee..56374c8b5436 100644 --- a/tools/testing/selftests/bpf/prog_tests/attach_probe.c +++ b/tools/testing/selftests/bpf/prog_tests/attach_probe.c @@ -33,8 +33,8 @@ void test_attach_probe(void) struct test_attach_probe* skel; ssize_t uprobe_offset, ref_ctr_offset; struct bpf_link *uprobe_err_link; + FILE *devnull; bool legacy; - char *mem; /* Check if new-style kprobe/uprobe API is supported. * Kernels that support new FD-based kprobe and uprobe BPF attachment @@ -147,7 +147,7 @@ void test_attach_probe(void) /* test attach by name for a library function, using the library * as the binary argument. libc.so.6 will be resolved via dlopen()/dlinfo(). 
*/ - uprobe_opts.func_name = "malloc"; + uprobe_opts.func_name = "fopen"; uprobe_opts.retprobe = false; skel->links.handle_uprobe_byname2 = bpf_program__attach_uprobe_opts(skel->progs.handle_uprobe_byname2, @@ -157,7 +157,7 @@ void test_attach_probe(void) if (!ASSERT_OK_PTR(skel->links.handle_uprobe_byname2, "attach_uprobe_byname2")) goto cleanup; - uprobe_opts.func_name = "free"; + uprobe_opts.func_name = "fclose"; uprobe_opts.retprobe = true; skel->links.handle_uretprobe_byname2 = bpf_program__attach_uprobe_opts(skel->progs.handle_uretprobe_byname2, @@ -195,8 +195,8 @@ void test_attach_probe(void) usleep(1); /* trigger & validate shared library u[ret]probes attached by name */ - mem = malloc(1); - free(mem); + devnull = fopen("/dev/null", "r"); + fclose(devnull); /* trigger & validate uprobe & uretprobe */ trigger_func(); diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c b/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c index 2be2d61954bc..26b2d1bffdfd 100644 --- a/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c +++ b/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c @@ -472,6 +472,7 @@ static void lsm_subtest(struct test_bpf_cookie *skel) int prog_fd; int lsm_fd = -1; LIBBPF_OPTS(bpf_link_create_opts, link_opts); + int err; skel->bss->lsm_res = 0; @@ -482,8 +483,9 @@ static void lsm_subtest(struct test_bpf_cookie *skel) if (!ASSERT_GE(lsm_fd, 0, "lsm.link_create")) goto cleanup; - stack_mprotect(); - if (!ASSERT_EQ(errno, EPERM, "stack_mprotect")) + err = stack_mprotect(); + if (!ASSERT_EQ(err, -1, "stack_mprotect") || + !ASSERT_EQ(errno, EPERM, "stack_mprotect")) goto cleanup; usleep(1); diff --git a/tools/testing/selftests/bpf/prog_tests/cgrp_local_storage.c b/tools/testing/selftests/bpf/prog_tests/cgrp_local_storage.c index 33a2776737e7..2cc759956e3b 100644 --- a/tools/testing/selftests/bpf/prog_tests/cgrp_local_storage.c +++ b/tools/testing/selftests/bpf/prog_tests/cgrp_local_storage.c @@ -16,7 +16,7 @@ struct socket_cookie { __u64 cookie_key; - __u32 cookie_value; + __u64 cookie_value; }; static void test_tp_btf(int cgroup_fd) diff --git a/tools/testing/selftests/bpf/prog_tests/decap_sanity.c b/tools/testing/selftests/bpf/prog_tests/decap_sanity.c index 0b2f73b88c53..2853883b7cbb 100644 --- a/tools/testing/selftests/bpf/prog_tests/decap_sanity.c +++ b/tools/testing/selftests/bpf/prog_tests/decap_sanity.c @@ -80,6 +80,6 @@ fail: bpf_tc_hook_destroy(&qdisc_hook); close_netns(nstoken); } - system("ip netns del " NS_TEST " >& /dev/null"); + system("ip netns del " NS_TEST " &> /dev/null"); decap_sanity__destroy(skel); } diff --git a/tools/testing/selftests/bpf/prog_tests/fexit_stress.c b/tools/testing/selftests/bpf/prog_tests/fexit_stress.c index 5a7e6011f6bf..596536def43d 100644 --- a/tools/testing/selftests/bpf/prog_tests/fexit_stress.c +++ b/tools/testing/selftests/bpf/prog_tests/fexit_stress.c @@ -2,14 +2,19 @@ /* Copyright (c) 2019 Facebook */ #include <test_progs.h> -/* that's kernel internal BPF_MAX_TRAMP_PROGS define */ -#define CNT 38 - void serial_test_fexit_stress(void) { - int fexit_fd[CNT] = {}; - int link_fd[CNT] = {}; - int err, i; + int bpf_max_tramp_links, err, i; + int *fd, *fexit_fd, *link_fd; + + bpf_max_tramp_links = get_bpf_max_tramp_links(); + if (!ASSERT_GE(bpf_max_tramp_links, 1, "bpf_max_tramp_links")) + return; + fd = calloc(bpf_max_tramp_links * 2, sizeof(*fd)); + if (!ASSERT_OK_PTR(fd, "fd")) + return; + fexit_fd = fd; + link_fd = fd + bpf_max_tramp_links; const struct bpf_insn trace_program[] = { BPF_MOV64_IMM(BPF_REG_0, 0), @@ 
-28,7 +33,7 @@ void serial_test_fexit_stress(void) goto out; trace_opts.attach_btf_id = err; - for (i = 0; i < CNT; i++) { + for (i = 0; i < bpf_max_tramp_links; i++) { fexit_fd[i] = bpf_prog_load(BPF_PROG_TYPE_TRACING, NULL, "GPL", trace_program, sizeof(trace_program) / sizeof(struct bpf_insn), @@ -44,10 +49,11 @@ void serial_test_fexit_stress(void) ASSERT_OK(err, "bpf_prog_test_run_opts"); out: - for (i = 0; i < CNT; i++) { + for (i = 0; i < bpf_max_tramp_links; i++) { if (link_fd[i]) close(link_fd[i]); if (fexit_fd[i]) close(fexit_fd[i]); } + free(fd); } diff --git a/tools/testing/selftests/bpf/prog_tests/kfree_skb.c b/tools/testing/selftests/bpf/prog_tests/kfree_skb.c index 73579370bfbd..c07991544a78 100644 --- a/tools/testing/selftests/bpf/prog_tests/kfree_skb.c +++ b/tools/testing/selftests/bpf/prog_tests/kfree_skb.c @@ -36,7 +36,7 @@ static void on_sample(void *ctx, int cpu, void *data, __u32 size) "cb32_0 %x != %x\n", meta->cb32_0, cb.cb32[0])) return; - if (CHECK(pkt_v6->eth.h_proto != 0xdd86, "check_eth", + if (CHECK(pkt_v6->eth.h_proto != htons(ETH_P_IPV6), "check_eth", "h_proto %x\n", pkt_v6->eth.h_proto)) return; if (CHECK(pkt_v6->iph.nexthdr != 6, "check_ip", diff --git a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c index 5af1ee8f0e6e..a543742cd7bd 100644 --- a/tools/testing/selftests/bpf/prog_tests/kfunc_call.c +++ b/tools/testing/selftests/bpf/prog_tests/kfunc_call.c @@ -72,10 +72,12 @@ static struct kfunc_test_params kfunc_tests[] = { /* success cases */ TC_TEST(kfunc_call_test1, 12), TC_TEST(kfunc_call_test2, 3), + TC_TEST(kfunc_call_test4, -1234), TC_TEST(kfunc_call_test_ref_btf_id, 0), TC_TEST(kfunc_call_test_get_mem, 42), SYSCALL_TEST(kfunc_syscall_test, 0), SYSCALL_NULL_CTX_TEST(kfunc_syscall_test_null, 0), + TC_TEST(kfunc_call_test_static_unused_arg, 0), }; struct syscall_test_args { diff --git a/tools/testing/selftests/bpf/prog_tests/sk_assign.c b/tools/testing/selftests/bpf/prog_tests/sk_assign.c index 3e190ed63976..1374b626a985 100644 --- a/tools/testing/selftests/bpf/prog_tests/sk_assign.c +++ b/tools/testing/selftests/bpf/prog_tests/sk_assign.c @@ -29,7 +29,23 @@ static int stop, duration; static bool configure_stack(void) { + char tc_version[128]; char tc_cmd[BUFSIZ]; + char *prog; + FILE *tc; + + /* Check whether tc is built with libbpf. */ + tc = popen("tc -V", "r"); + if (CHECK_FAIL(!tc)) + return false; + if (CHECK_FAIL(!fgets(tc_version, sizeof(tc_version), tc))) + return false; + if (strstr(tc_version, ", libbpf ")) + prog = "test_sk_assign_libbpf.bpf.o"; + else + prog = "test_sk_assign.bpf.o"; + if (CHECK_FAIL(pclose(tc))) + return false; /* Move to a new networking namespace */ if (CHECK_FAIL(unshare(CLONE_NEWNET))) @@ -46,8 +62,8 @@ configure_stack(void) /* Load qdisc, BPF program */ if (CHECK_FAIL(system("tc qdisc add dev lo clsact"))) return false; - sprintf(tc_cmd, "%s %s %s %s", "tc filter add dev lo ingress bpf", - "direct-action object-file ./test_sk_assign.bpf.o", + sprintf(tc_cmd, "%s %s %s %s %s", "tc filter add dev lo ingress bpf", + "direct-action object-file", prog, "section tc", (env.verbosity < VERBOSE_VERY) ? 
" 2>/dev/null" : "verbose"); if (CHECK(system(tc_cmd), "BPF load failed;", @@ -129,15 +145,12 @@ get_port(int fd) static ssize_t rcv_msg(int srv_client, int type) { - struct sockaddr_storage ss; char buf[BUFSIZ]; - socklen_t slen; if (type == SOCK_STREAM) return read(srv_client, &buf, sizeof(buf)); else - return recvfrom(srv_client, &buf, sizeof(buf), 0, - (struct sockaddr *)&ss, &slen); + return recvfrom(srv_client, &buf, sizeof(buf), 0, NULL, NULL); } static int diff --git a/tools/testing/selftests/bpf/prog_tests/test_lsm.c b/tools/testing/selftests/bpf/prog_tests/test_lsm.c index 244c01125126..16175d579bc7 100644 --- a/tools/testing/selftests/bpf/prog_tests/test_lsm.c +++ b/tools/testing/selftests/bpf/prog_tests/test_lsm.c @@ -75,7 +75,8 @@ static int test_lsm(struct lsm *skel) skel->bss->monitored_pid = getpid(); err = stack_mprotect(); - if (!ASSERT_EQ(errno, EPERM, "stack_mprotect")) + if (!ASSERT_EQ(err, -1, "stack_mprotect") || + !ASSERT_EQ(errno, EPERM, "stack_mprotect")) return err; ASSERT_EQ(skel->bss->mprotect_count, 1, "mprotect_count"); diff --git a/tools/testing/selftests/bpf/prog_tests/trampoline_count.c b/tools/testing/selftests/bpf/prog_tests/trampoline_count.c index 564b75bc087f..8fd4c0d78089 100644 --- a/tools/testing/selftests/bpf/prog_tests/trampoline_count.c +++ b/tools/testing/selftests/bpf/prog_tests/trampoline_count.c @@ -2,8 +2,6 @@ #define _GNU_SOURCE #include <test_progs.h> -#define MAX_TRAMP_PROGS 38 - struct inst { struct bpf_object *obj; struct bpf_link *link; @@ -37,14 +35,21 @@ void serial_test_trampoline_count(void) { char *file = "test_trampoline_count.bpf.o"; char *const progs[] = { "fentry_test", "fmod_ret_test", "fexit_test" }; - struct inst inst[MAX_TRAMP_PROGS + 1] = {}; + int bpf_max_tramp_links, err, i, prog_fd; struct bpf_program *prog; struct bpf_link *link; - int prog_fd, err, i; + struct inst *inst; LIBBPF_OPTS(bpf_test_run_opts, opts); + bpf_max_tramp_links = get_bpf_max_tramp_links(); + if (!ASSERT_GE(bpf_max_tramp_links, 1, "bpf_max_tramp_links")) + return; + inst = calloc(bpf_max_tramp_links + 1, sizeof(*inst)); + if (!ASSERT_OK_PTR(inst, "inst")) + return; + /* attach 'allowed' trampoline programs */ - for (i = 0; i < MAX_TRAMP_PROGS; i++) { + for (i = 0; i < bpf_max_tramp_links; i++) { prog = load_prog(file, progs[i % ARRAY_SIZE(progs)], &inst[i]); if (!prog) goto cleanup; @@ -91,4 +96,5 @@ cleanup: bpf_link__destroy(inst[i].link); bpf_object__close(inst[i].obj); } + free(inst); } diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe_autoattach.c b/tools/testing/selftests/bpf/prog_tests/uprobe_autoattach.c index 82807def0d24..6558c857e620 100644 --- a/tools/testing/selftests/bpf/prog_tests/uprobe_autoattach.c +++ b/tools/testing/selftests/bpf/prog_tests/uprobe_autoattach.c @@ -16,10 +16,10 @@ static noinline int autoattach_trigger_func(int arg1, int arg2, int arg3, void test_uprobe_autoattach(void) { + const char *devnull_str = "/dev/null"; struct test_uprobe_autoattach *skel; int trigger_ret; - size_t malloc_sz = 1; - char *mem; + FILE *devnull; skel = test_uprobe_autoattach__open_and_load(); if (!ASSERT_OK_PTR(skel, "skel_open")) @@ -36,16 +36,18 @@ void test_uprobe_autoattach(void) skel->bss->test_pid = getpid(); /* trigger & validate shared library u[ret]probes attached by name */ - mem = malloc(malloc_sz); + devnull = fopen(devnull_str, "r"); ASSERT_EQ(skel->bss->uprobe_byname_parm1, 1, "check_uprobe_byname_parm1"); ASSERT_EQ(skel->bss->uprobe_byname_ran, 1, "check_uprobe_byname_ran"); ASSERT_EQ(skel->bss->uretprobe_byname_rc, 
trigger_ret, "check_uretprobe_byname_rc"); ASSERT_EQ(skel->bss->uretprobe_byname_ret, trigger_ret, "check_uretprobe_byname_ret"); ASSERT_EQ(skel->bss->uretprobe_byname_ran, 2, "check_uretprobe_byname_ran"); - ASSERT_EQ(skel->bss->uprobe_byname2_parm1, malloc_sz, "check_uprobe_byname2_parm1"); + ASSERT_EQ(skel->bss->uprobe_byname2_parm1, (__u64)(long)devnull_str, + "check_uprobe_byname2_parm1"); ASSERT_EQ(skel->bss->uprobe_byname2_ran, 3, "check_uprobe_byname2_ran"); - ASSERT_EQ(skel->bss->uretprobe_byname2_rc, mem, "check_uretprobe_byname2_rc"); + ASSERT_EQ(skel->bss->uretprobe_byname2_rc, (__u64)(long)devnull, + "check_uretprobe_byname2_rc"); ASSERT_EQ(skel->bss->uretprobe_byname2_ran, 4, "check_uretprobe_byname2_ran"); ASSERT_EQ(skel->bss->a[0], 1, "arg1"); @@ -67,7 +69,7 @@ void test_uprobe_autoattach(void) ASSERT_EQ(skel->bss->a[7], 8, "arg8"); #endif - free(mem); + fclose(devnull); cleanup: test_uprobe_autoattach__destroy(skel); } diff --git a/tools/testing/selftests/bpf/prog_tests/usdt.c b/tools/testing/selftests/bpf/prog_tests/usdt.c index 9ad9da0f215e..56ed1eb9b527 100644 --- a/tools/testing/selftests/bpf/prog_tests/usdt.c +++ b/tools/testing/selftests/bpf/prog_tests/usdt.c @@ -314,6 +314,7 @@ static FILE *urand_spawn(int *pid) if (fscanf(f, "%d", pid) != 1) { pclose(f); + errno = EINVAL; return NULL; } diff --git a/tools/testing/selftests/bpf/prog_tests/verify_pkcs7_sig.c b/tools/testing/selftests/bpf/prog_tests/verify_pkcs7_sig.c index 579d6ee83ce0..dd7f2bc70048 100644 --- a/tools/testing/selftests/bpf/prog_tests/verify_pkcs7_sig.c +++ b/tools/testing/selftests/bpf/prog_tests/verify_pkcs7_sig.c @@ -61,6 +61,9 @@ static bool kfunc_not_supported; static int libbpf_print_cb(enum libbpf_print_level level, const char *fmt, va_list args) { + if (level == LIBBPF_WARN) + vprintf(fmt, args); + if (strcmp(fmt, "libbpf: extern (func ksym) '%s': not found in kernel or module BTFs\n")) return 0; diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_adjust_tail.c b/tools/testing/selftests/bpf/prog_tests/xdp_adjust_tail.c index 39973ea1ce43..f09505f8b038 100644 --- a/tools/testing/selftests/bpf/prog_tests/xdp_adjust_tail.c +++ b/tools/testing/selftests/bpf/prog_tests/xdp_adjust_tail.c @@ -76,10 +76,15 @@ static void test_xdp_adjust_tail_grow2(void) { const char *file = "./test_xdp_adjust_tail_grow.bpf.o"; char buf[4096]; /* avoid segfault: large buf to hold grow results */ - int tailroom = 320; /* SKB_DATA_ALIGN(sizeof(struct skb_shared_info))*/; struct bpf_object *obj; int err, cnt, i; int max_grow, prog_fd; + /* SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) */ +#if defined(__s390x__) + int tailroom = 512; +#else + int tailroom = 320; +#endif LIBBPF_OPTS(bpf_test_run_opts, tattr, .repeat = 1, diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c b/tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c index a50971c6cf4a..2666c84dbd01 100644 --- a/tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c +++ b/tools/testing/selftests/bpf/prog_tests/xdp_do_redirect.c @@ -4,10 +4,12 @@ #include <net/if.h> #include <linux/if_ether.h> #include <linux/if_packet.h> +#include <linux/if_link.h> #include <linux/ipv6.h> #include <linux/in6.h> #include <linux/udp.h> #include <bpf/bpf_endian.h> +#include <uapi/linux/netdev.h> #include "test_xdp_do_redirect.skel.h" #define SYS(fmt, ...) 
\ @@ -65,7 +67,11 @@ static int attach_tc_prog(struct bpf_tc_hook *hook, int fd) /* The maximum permissible size is: PAGE_SIZE - sizeof(struct xdp_page_head) - * sizeof(struct skb_shared_info) - XDP_PACKET_HEADROOM = 3368 bytes */ +#if defined(__s390x__) +#define MAX_PKT_SIZE 3176 +#else #define MAX_PKT_SIZE 3368 +#endif static void test_max_pkt_size(int fd) { char data[MAX_PKT_SIZE + 1] = {}; @@ -92,7 +98,7 @@ void test_xdp_do_redirect(void) struct test_xdp_do_redirect *skel = NULL; struct nstoken *nstoken = NULL; struct bpf_link *link; - + LIBBPF_OPTS(bpf_xdp_query_opts, query_opts); struct xdp_md ctx_in = { .data = sizeof(__u32), .data_end = sizeof(data) }; DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts, @@ -153,6 +159,29 @@ void test_xdp_do_redirect(void) !ASSERT_NEQ(ifindex_dst, 0, "ifindex_dst")) goto out; + /* Check xdp features supported by veth driver */ + err = bpf_xdp_query(ifindex_src, XDP_FLAGS_DRV_MODE, &query_opts); + if (!ASSERT_OK(err, "veth_src bpf_xdp_query")) + goto out; + + if (!ASSERT_EQ(query_opts.feature_flags, + NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT | NETDEV_XDP_ACT_RX_SG | + NETDEV_XDP_ACT_NDO_XMIT_SG, + "veth_src query_opts.feature_flags")) + goto out; + + err = bpf_xdp_query(ifindex_dst, XDP_FLAGS_DRV_MODE, &query_opts); + if (!ASSERT_OK(err, "veth_dst bpf_xdp_query")) + goto out; + + if (!ASSERT_EQ(query_opts.feature_flags, + NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_NDO_XMIT | NETDEV_XDP_ACT_RX_SG | + NETDEV_XDP_ACT_NDO_XMIT_SG, + "veth_dst query_opts.feature_flags")) + goto out; + memcpy(skel->rodata->expect_dst, &pkt_udp.eth.h_dest, ETH_ALEN); skel->rodata->ifindex_out = ifindex_src; /* redirect back to the same iface */ skel->rodata->ifindex_in = ifindex_src; diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_info.c b/tools/testing/selftests/bpf/prog_tests/xdp_info.c index cd3aa340e65e..286c21ecdc65 100644 --- a/tools/testing/selftests/bpf/prog_tests/xdp_info.c +++ b/tools/testing/selftests/bpf/prog_tests/xdp_info.c @@ -8,6 +8,7 @@ void serial_test_xdp_info(void) { __u32 len = sizeof(struct bpf_prog_info), duration = 0, prog_id; const char *file = "./xdp_dummy.bpf.o"; + LIBBPF_OPTS(bpf_xdp_query_opts, opts); struct bpf_prog_info info = {}; struct bpf_object *obj; int err, prog_fd; @@ -61,6 +62,13 @@ void serial_test_xdp_info(void) if (CHECK(prog_id, "prog_id_drv", "unexpected prog_id=%u\n", prog_id)) goto out; + /* Check xdp features supported by lo device */ + opts.feature_flags = ~0; + err = bpf_xdp_query(IFINDEX_LO, XDP_FLAGS_DRV_MODE, &opts); + if (!ASSERT_OK(err, "bpf_xdp_query")) + goto out; + + ASSERT_EQ(opts.feature_flags, 0, "opts.feature_flags"); out: bpf_xdp_detach(IFINDEX_LO, 0, NULL); out_close: diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_metadata.c b/tools/testing/selftests/bpf/prog_tests/xdp_metadata.c index e033d48288c0..aa4beae99f4f 100644 --- a/tools/testing/selftests/bpf/prog_tests/xdp_metadata.c +++ b/tools/testing/selftests/bpf/prog_tests/xdp_metadata.c @@ -121,7 +121,7 @@ static void close_xsk(struct xsk *xsk) xsk_umem__delete(xsk->umem); if (xsk->socket) xsk_socket__delete(xsk->socket); - munmap(xsk->umem, UMEM_SIZE); + munmap(xsk->umem_area, UMEM_SIZE); } static void ip_csum(struct iphdr *iph) @@ -205,9 +205,8 @@ static void complete_tx(struct xsk *xsk) if (ASSERT_EQ(xsk_ring_cons__peek(&xsk->comp, 1, &idx), 1, "xsk_ring_cons__peek")) { addr = *xsk_ring_cons__comp_addr(&xsk->comp, idx); - printf("%p: refill idx=%u addr=%llx\n", xsk, idx, addr); - 
*xsk_ring_prod__fill_addr(&xsk->fill, idx) = addr; - xsk_ring_prod__submit(&xsk->fill, 1); + printf("%p: complete tx idx=%u addr=%llx\n", xsk, idx, addr); + xsk_ring_cons__release(&xsk->comp, 1); } } diff --git a/tools/testing/selftests/bpf/progs/kfunc_call_test.c b/tools/testing/selftests/bpf/progs/kfunc_call_test.c index f636e50be259..7daa8f5720b9 100644 --- a/tools/testing/selftests/bpf/progs/kfunc_call_test.c +++ b/tools/testing/selftests/bpf/progs/kfunc_call_test.c @@ -3,6 +3,7 @@ #include <vmlinux.h> #include <bpf/bpf_helpers.h> +extern long bpf_kfunc_call_test4(signed char a, short b, int c, long d) __ksym; extern int bpf_kfunc_call_test2(struct sock *sk, __u32 a, __u32 b) __ksym; extern __u64 bpf_kfunc_call_test1(struct sock *sk, __u32 a, __u64 b, __u32 c, __u64 d) __ksym; @@ -16,6 +17,24 @@ extern void bpf_kfunc_call_test_mem_len_pass1(void *mem, int len) __ksym; extern void bpf_kfunc_call_test_mem_len_fail2(__u64 *mem, int len) __ksym; extern int *bpf_kfunc_call_test_get_rdwr_mem(struct prog_test_ref_kfunc *p, const int rdwr_buf_size) __ksym; extern int *bpf_kfunc_call_test_get_rdonly_mem(struct prog_test_ref_kfunc *p, const int rdonly_buf_size) __ksym; +extern u32 bpf_kfunc_call_test_static_unused_arg(u32 arg, u32 unused) __ksym; + +SEC("tc") +int kfunc_call_test4(struct __sk_buff *skb) +{ + struct bpf_sock *sk = skb->sk; + long tmp; + + if (!sk) + return -1; + + sk = bpf_sk_fullsock(sk); + if (!sk) + return -1; + + tmp = bpf_kfunc_call_test4(-3, -30, -200, -1000); + return (tmp >> 32) + tmp; +} SEC("tc") int kfunc_call_test2(struct __sk_buff *skb) @@ -163,4 +182,14 @@ int kfunc_call_test_get_mem(struct __sk_buff *skb) return ret; } +SEC("tc") +int kfunc_call_test_static_unused_arg(struct __sk_buff *skb) +{ + + u32 expected = 5, actual; + + actual = bpf_kfunc_call_test_static_unused_arg(expected, 0xdeadbeef); + return actual != expected ? -1 : 0; +} + char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/lsm.c b/tools/testing/selftests/bpf/progs/lsm.c index d8d8af623bc2..dc93887ed34c 100644 --- a/tools/testing/selftests/bpf/progs/lsm.c +++ b/tools/testing/selftests/bpf/progs/lsm.c @@ -6,9 +6,10 @@ #include "bpf_misc.h" #include "vmlinux.h" +#include <bpf/bpf_core_read.h> #include <bpf/bpf_helpers.h> #include <bpf/bpf_tracing.h> -#include <errno.h> +#include <errno.h> struct { __uint(type, BPF_MAP_TYPE_ARRAY); @@ -164,8 +165,8 @@ int copy_test = 0; SEC("fentry.s/" SYS_PREFIX "sys_setdomainname") int BPF_PROG(test_sys_setdomainname, struct pt_regs *regs) { - void *ptr = (void *)PT_REGS_PARM1(regs); - int len = PT_REGS_PARM2(regs); + void *ptr = (void *)PT_REGS_PARM1_SYSCALL(regs); + int len = PT_REGS_PARM2_SYSCALL(regs); int buf = 0; long ret; diff --git a/tools/testing/selftests/bpf/progs/profiler.inc.h b/tools/testing/selftests/bpf/progs/profiler.inc.h index 92331053dba3..68a3fd7387a4 100644 --- a/tools/testing/selftests/bpf/progs/profiler.inc.h +++ b/tools/testing/selftests/bpf/progs/profiler.inc.h @@ -156,10 +156,10 @@ probe_read_lim(void* dst, void* src, unsigned long len, unsigned long max) { len = len < max ? 
len : max; if (len > 1) { - if (bpf_probe_read(dst, len, src)) + if (bpf_probe_read_kernel(dst, len, src)) return 0; } else if (len == 1) { - if (bpf_probe_read(dst, 1, src)) + if (bpf_probe_read_kernel(dst, 1, src)) return 0; } return len; @@ -216,7 +216,8 @@ static INLINE void* read_full_cgroup_path(struct kernfs_node* cgroup_node, #endif for (int i = 0; i < MAX_CGROUPS_PATH_DEPTH; i++) { filepart_length = - bpf_probe_read_str(payload, MAX_PATH, BPF_CORE_READ(cgroup_node, name)); + bpf_probe_read_kernel_str(payload, MAX_PATH, + BPF_CORE_READ(cgroup_node, name)); if (!cgroup_node) return payload; if (cgroup_node == cgroup_root_node) @@ -303,7 +304,8 @@ static INLINE void* populate_cgroup_info(struct cgroup_data_t* cgroup_data, cgroup_data->cgroup_full_length = 0; size_t cgroup_root_length = - bpf_probe_read_str(payload, MAX_PATH, BPF_CORE_READ(root_kernfs, name)); + bpf_probe_read_kernel_str(payload, MAX_PATH, + BPF_CORE_READ(root_kernfs, name)); barrier_var(cgroup_root_length); if (cgroup_root_length <= MAX_PATH) { barrier_var(cgroup_root_length); @@ -312,7 +314,8 @@ static INLINE void* populate_cgroup_info(struct cgroup_data_t* cgroup_data, } size_t cgroup_proc_length = - bpf_probe_read_str(payload, MAX_PATH, BPF_CORE_READ(proc_kernfs, name)); + bpf_probe_read_kernel_str(payload, MAX_PATH, + BPF_CORE_READ(proc_kernfs, name)); barrier_var(cgroup_proc_length); if (cgroup_proc_length <= MAX_PATH) { barrier_var(cgroup_proc_length); @@ -395,7 +398,8 @@ static INLINE int trace_var_sys_kill(void* ctx, int tpid, int sig) arr_struct = bpf_map_lookup_elem(&data_heap, &zero); if (arr_struct == NULL) return 0; - bpf_probe_read(&arr_struct->array[0], sizeof(arr_struct->array[0]), kill_data); + bpf_probe_read_kernel(&arr_struct->array[0], + sizeof(arr_struct->array[0]), kill_data); } else { int index = get_var_spid_index(arr_struct, spid); @@ -409,8 +413,9 @@ static INLINE int trace_var_sys_kill(void* ctx, int tpid, int sig) #endif for (int i = 0; i < ARRAY_SIZE(arr_struct->array); i++) if (arr_struct->array[i].meta.pid == 0) { - bpf_probe_read(&arr_struct->array[i], - sizeof(arr_struct->array[i]), kill_data); + bpf_probe_read_kernel(&arr_struct->array[i], + sizeof(arr_struct->array[i]), + kill_data); bpf_map_update_elem(&var_tpid_to_data, &tpid, arr_struct, 0); @@ -427,17 +432,17 @@ static INLINE int trace_var_sys_kill(void* ctx, int tpid, int sig) if (delta_sec < STALE_INFO) { kill_data->kill_count++; kill_data->last_kill_time = bpf_ktime_get_ns(); - bpf_probe_read(&arr_struct->array[index], - sizeof(arr_struct->array[index]), - kill_data); + bpf_probe_read_kernel(&arr_struct->array[index], + sizeof(arr_struct->array[index]), + kill_data); } else { struct var_kill_data_t* kill_data = get_var_kill_data(ctx, spid, tpid, sig); if (kill_data == NULL) return 0; - bpf_probe_read(&arr_struct->array[index], - sizeof(arr_struct->array[index]), - kill_data); + bpf_probe_read_kernel(&arr_struct->array[index], + sizeof(arr_struct->array[index]), + kill_data); } } bpf_map_update_elem(&var_tpid_to_data, &tpid, arr_struct, 0); @@ -487,8 +492,9 @@ read_absolute_file_path_from_dentry(struct dentry* filp_dentry, void* payload) #pragma unroll #endif for (int i = 0; i < MAX_PATH_DEPTH; i++) { - filepart_length = bpf_probe_read_str(payload, MAX_PATH, - BPF_CORE_READ(filp_dentry, d_name.name)); + filepart_length = + bpf_probe_read_kernel_str(payload, MAX_PATH, + BPF_CORE_READ(filp_dentry, d_name.name)); barrier_var(filepart_length); if (filepart_length > MAX_PATH) break; @@ -572,7 +578,8 @@ ssize_t 
BPF_KPROBE(kprobe__proc_sys_write, sysctl_data->sysctl_val_length = 0; sysctl_data->sysctl_path_length = 0; - size_t sysctl_val_length = bpf_probe_read_str(payload, CTL_MAXNAME, buf); + size_t sysctl_val_length = bpf_probe_read_kernel_str(payload, + CTL_MAXNAME, buf); barrier_var(sysctl_val_length); if (sysctl_val_length <= CTL_MAXNAME) { barrier_var(sysctl_val_length); @@ -580,8 +587,10 @@ ssize_t BPF_KPROBE(kprobe__proc_sys_write, payload += sysctl_val_length; } - size_t sysctl_path_length = bpf_probe_read_str(payload, MAX_PATH, - BPF_CORE_READ(filp, f_path.dentry, d_name.name)); + size_t sysctl_path_length = + bpf_probe_read_kernel_str(payload, MAX_PATH, + BPF_CORE_READ(filp, f_path.dentry, + d_name.name)); barrier_var(sysctl_path_length); if (sysctl_path_length <= MAX_PATH) { barrier_var(sysctl_path_length); @@ -638,7 +647,8 @@ int raw_tracepoint__sched_process_exit(void* ctx) struct var_kill_data_t* past_kill_data = &arr_struct->array[i]; if (past_kill_data != NULL && past_kill_data->kill_target_pid == tpid) { - bpf_probe_read(kill_data, sizeof(*past_kill_data), past_kill_data); + bpf_probe_read_kernel(kill_data, sizeof(*past_kill_data), + past_kill_data); void* payload = kill_data->payload; size_t offset = kill_data->payload_length; if (offset >= MAX_METADATA_PAYLOAD_LEN + MAX_CGROUP_PAYLOAD_LEN) @@ -656,8 +666,10 @@ int raw_tracepoint__sched_process_exit(void* ctx) payload += comm_length; } - size_t cgroup_proc_length = bpf_probe_read_str(payload, KILL_TARGET_LEN, - BPF_CORE_READ(proc_kernfs, name)); + size_t cgroup_proc_length = + bpf_probe_read_kernel_str(payload, + KILL_TARGET_LEN, + BPF_CORE_READ(proc_kernfs, name)); barrier_var(cgroup_proc_length); if (cgroup_proc_length <= KILL_TARGET_LEN) { barrier_var(cgroup_proc_length); @@ -718,7 +730,8 @@ int raw_tracepoint__sched_process_exec(struct bpf_raw_tracepoint_args* ctx) proc_exec_data->parent_start_time = BPF_CORE_READ(parent_task, start_time); const char* filename = BPF_CORE_READ(bprm, filename); - size_t bin_path_length = bpf_probe_read_str(payload, MAX_FILENAME_LEN, filename); + size_t bin_path_length = + bpf_probe_read_kernel_str(payload, MAX_FILENAME_LEN, filename); barrier_var(bin_path_length); if (bin_path_length <= MAX_FILENAME_LEN) { barrier_var(bin_path_length); @@ -922,7 +935,8 @@ int BPF_KPROBE(kprobe__vfs_symlink, struct inode* dir, struct dentry* dentry, filemod_data->payload); payload = populate_cgroup_info(&filemod_data->cgroup_data, task, payload); - size_t len = bpf_probe_read_str(payload, MAX_FILEPATH_LENGTH, oldname); + size_t len = bpf_probe_read_kernel_str(payload, MAX_FILEPATH_LENGTH, + oldname); barrier_var(len); if (len <= MAX_FILEPATH_LENGTH) { barrier_var(len); diff --git a/tools/testing/selftests/bpf/progs/test_attach_probe.c b/tools/testing/selftests/bpf/progs/test_attach_probe.c index a1e45fec8938..3b5dc34d23e9 100644 --- a/tools/testing/selftests/bpf/progs/test_attach_probe.c +++ b/tools/testing/selftests/bpf/progs/test_attach_probe.c @@ -92,18 +92,19 @@ int handle_uretprobe_byname(struct pt_regs *ctx) } SEC("uprobe") -int handle_uprobe_byname2(struct pt_regs *ctx) +int BPF_UPROBE(handle_uprobe_byname2, const char *pathname, const char *mode) { - unsigned int size = PT_REGS_PARM1(ctx); + char mode_buf[2] = {}; - /* verify malloc size */ - if (size == 1) + /* verify fopen mode */ + bpf_probe_read_user(mode_buf, sizeof(mode_buf), mode); + if (mode_buf[0] == 'r' && mode_buf[1] == 0) uprobe_byname2_res = 7; return 0; } SEC("uretprobe") -int handle_uretprobe_byname2(struct pt_regs *ctx) +int 
BPF_URETPROBE(handle_uretprobe_byname2, void *ret) { uretprobe_byname2_res = 8; return 0; diff --git a/tools/testing/selftests/bpf/progs/test_sk_assign.c b/tools/testing/selftests/bpf/progs/test_sk_assign.c index 98c6493d9b91..21b19b758c4e 100644 --- a/tools/testing/selftests/bpf/progs/test_sk_assign.c +++ b/tools/testing/selftests/bpf/progs/test_sk_assign.c @@ -16,6 +16,16 @@ #include <bpf/bpf_helpers.h> #include <bpf/bpf_endian.h> +#if defined(IPROUTE2_HAVE_LIBBPF) +/* Use a new-style map definition. */ +struct { + __uint(type, BPF_MAP_TYPE_SOCKMAP); + __type(key, int); + __type(value, __u64); + __uint(pinning, LIBBPF_PIN_BY_NAME); + __uint(max_entries, 1); +} server_map SEC(".maps"); +#else /* Pin map under /sys/fs/bpf/tc/globals/<map name> */ #define PIN_GLOBAL_NS 2 @@ -35,6 +45,7 @@ struct { .max_elem = 1, .pinning = PIN_GLOBAL_NS, }; +#endif char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/test_sk_assign_libbpf.c b/tools/testing/selftests/bpf/progs/test_sk_assign_libbpf.c new file mode 100644 index 000000000000..dcf46adfda04 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/test_sk_assign_libbpf.c @@ -0,0 +1,3 @@ +// SPDX-License-Identifier: GPL-2.0 +#define IPROUTE2_HAVE_LIBBPF +#include "test_sk_assign.c" diff --git a/tools/testing/selftests/bpf/progs/test_uprobe_autoattach.c b/tools/testing/selftests/bpf/progs/test_uprobe_autoattach.c index 774ddeb45898..da4bf89d004c 100644 --- a/tools/testing/selftests/bpf/progs/test_uprobe_autoattach.c +++ b/tools/testing/selftests/bpf/progs/test_uprobe_autoattach.c @@ -13,9 +13,9 @@ int uprobe_byname_ran = 0; int uretprobe_byname_rc = 0; int uretprobe_byname_ret = 0; int uretprobe_byname_ran = 0; -size_t uprobe_byname2_parm1 = 0; +u64 uprobe_byname2_parm1 = 0; int uprobe_byname2_ran = 0; -char *uretprobe_byname2_rc = NULL; +u64 uretprobe_byname2_rc = 0; int uretprobe_byname2_ran = 0; int test_pid; @@ -88,28 +88,28 @@ int BPF_URETPROBE(handle_uretprobe_byname, int ret) } -SEC("uprobe/libc.so.6:malloc") -int handle_uprobe_byname2(struct pt_regs *ctx) +SEC("uprobe/libc.so.6:fopen") +int BPF_UPROBE(handle_uprobe_byname2, const char *pathname, const char *mode) { int pid = bpf_get_current_pid_tgid() >> 32; /* ignore irrelevant invocations */ if (test_pid != pid) return 0; - uprobe_byname2_parm1 = PT_REGS_PARM1_CORE(ctx); + uprobe_byname2_parm1 = (u64)(long)pathname; uprobe_byname2_ran = 3; return 0; } -SEC("uretprobe/libc.so.6:malloc") -int handle_uretprobe_byname2(struct pt_regs *ctx) +SEC("uretprobe/libc.so.6:fopen") +int BPF_URETPROBE(handle_uretprobe_byname2, void *ret) { int pid = bpf_get_current_pid_tgid() >> 32; /* ignore irrelevant invocations */ if (test_pid != pid) return 0; - uretprobe_byname2_rc = (char *)PT_REGS_RC_CORE(ctx); + uretprobe_byname2_rc = (u64)(long)ret; uretprobe_byname2_ran = 4; return 0; } diff --git a/tools/testing/selftests/bpf/progs/test_verify_pkcs7_sig.c b/tools/testing/selftests/bpf/progs/test_verify_pkcs7_sig.c index ce419304ff1f..7748cc23de8a 100644 --- a/tools/testing/selftests/bpf/progs/test_verify_pkcs7_sig.c +++ b/tools/testing/selftests/bpf/progs/test_verify_pkcs7_sig.c @@ -59,10 +59,14 @@ int BPF_PROG(bpf, int cmd, union bpf_attr *attr, unsigned int size) if (!data_val) return 0; - bpf_probe_read(&value, sizeof(value), &attr->value); - - bpf_copy_from_user(data_val, sizeof(struct data), - (void *)(unsigned long)value); + ret = bpf_probe_read_kernel(&value, sizeof(value), &attr->value); + if (ret) + return ret; + + ret = bpf_copy_from_user(data_val, sizeof(struct 
data), + (void *)(unsigned long)value); + if (ret) + return ret; if (data_val->data_len > sizeof(data_val->data)) return -EINVAL; diff --git a/tools/testing/selftests/bpf/progs/test_vmlinux.c b/tools/testing/selftests/bpf/progs/test_vmlinux.c index e9dfa0313d1b..4b8e37f7fd06 100644 --- a/tools/testing/selftests/bpf/progs/test_vmlinux.c +++ b/tools/testing/selftests/bpf/progs/test_vmlinux.c @@ -42,7 +42,7 @@ int BPF_PROG(handle__raw_tp, struct pt_regs *regs, long id) if (id != __NR_nanosleep) return 0; - ts = (void *)PT_REGS_PARM1_CORE(regs); + ts = (void *)PT_REGS_PARM1_CORE_SYSCALL(regs); if (bpf_probe_read_user(&tv_nsec, sizeof(ts->tv_nsec), &ts->tv_nsec) || tv_nsec != MY_TV_NSEC) return 0; @@ -60,7 +60,7 @@ int BPF_PROG(handle__tp_btf, struct pt_regs *regs, long id) if (id != __NR_nanosleep) return 0; - ts = (void *)PT_REGS_PARM1_CORE(regs); + ts = (void *)PT_REGS_PARM1_CORE_SYSCALL(regs); if (bpf_probe_read_user(&tv_nsec, sizeof(ts->tv_nsec), &ts->tv_nsec) || tv_nsec != MY_TV_NSEC) return 0; diff --git a/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_grow.c b/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_grow.c index 53b64c999450..297c260fc364 100644 --- a/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_grow.c +++ b/tools/testing/selftests/bpf/progs/test_xdp_adjust_tail_grow.c @@ -9,6 +9,12 @@ int _xdp_adjust_tail_grow(struct xdp_md *xdp) void *data = (void *)(long)xdp->data; int data_len = bpf_xdp_get_buff_len(xdp); int offset = 0; + /* SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) */ +#if defined(__TARGET_ARCH_s390) + int tailroom = 512; +#else + int tailroom = 320; +#endif /* Data length determine test case */ @@ -20,7 +26,7 @@ int _xdp_adjust_tail_grow(struct xdp_md *xdp) offset = 128; } else if (data_len == 128) { /* Max tail grow 3520 */ - offset = 4096 - 256 - 320 - data_len; + offset = 4096 - 256 - tailroom - data_len; } else if (data_len == 9000) { offset = 10; } else if (data_len == 9001) { diff --git a/tools/testing/selftests/bpf/progs/xdp_features.c b/tools/testing/selftests/bpf/progs/xdp_features.c new file mode 100644 index 000000000000..87c247d56f72 --- /dev/null +++ b/tools/testing/selftests/bpf/progs/xdp_features.c @@ -0,0 +1,269 @@ +// SPDX-License-Identifier: GPL-2.0 + +#include <stdbool.h> +#include <linux/bpf.h> +#include <linux/netdev.h> +#include <bpf/bpf_helpers.h> +#include <bpf/bpf_endian.h> +#include <bpf/bpf_tracing.h> +#include <linux/if_ether.h> +#include <linux/ip.h> +#include <linux/ipv6.h> +#include <linux/in.h> +#include <linux/in6.h> +#include <linux/udp.h> +#include <asm-generic/errno-base.h> + +#include "xdp_features.h" + +#define ipv6_addr_equal(a, b) ((a).s6_addr32[0] == (b).s6_addr32[0] && \ + (a).s6_addr32[1] == (b).s6_addr32[1] && \ + (a).s6_addr32[2] == (b).s6_addr32[2] && \ + (a).s6_addr32[3] == (b).s6_addr32[3]) + +struct net_device; +struct bpf_prog; + +struct xdp_cpumap_stats { + unsigned int redirect; + unsigned int pass; + unsigned int drop; +}; + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __type(key, __u32); + __type(value, __u32); + __uint(max_entries, 1); +} stats SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_ARRAY); + __type(key, __u32); + __type(value, __u32); + __uint(max_entries, 1); +} dut_stats SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_CPUMAP); + __uint(key_size, sizeof(__u32)); + __uint(value_size, sizeof(struct bpf_cpumap_val)); + __uint(max_entries, 1); +} cpu_map SEC(".maps"); + +struct { + __uint(type, BPF_MAP_TYPE_DEVMAP); + __uint(key_size, sizeof(__u32)); + 
__uint(value_size, sizeof(struct bpf_devmap_val)); + __uint(max_entries, 1); +} dev_map SEC(".maps"); + +const volatile struct in6_addr tester_addr; +const volatile struct in6_addr dut_addr; + +static __always_inline int +xdp_process_echo_packet(struct xdp_md *xdp, bool dut) +{ + void *data_end = (void *)(long)xdp->data_end; + void *data = (void *)(long)xdp->data; + struct ethhdr *eh = data; + struct tlv_hdr *tlv; + struct udphdr *uh; + __be16 port; + __u8 *cmd; + + if (eh + 1 > (struct ethhdr *)data_end) + return -EINVAL; + + if (eh->h_proto == bpf_htons(ETH_P_IP)) { + struct iphdr *ih = (struct iphdr *)(eh + 1); + __be32 saddr = dut ? tester_addr.s6_addr32[3] + : dut_addr.s6_addr32[3]; + __be32 daddr = dut ? dut_addr.s6_addr32[3] + : tester_addr.s6_addr32[3]; + + ih = (struct iphdr *)(eh + 1); + if (ih + 1 > (struct iphdr *)data_end) + return -EINVAL; + + if (saddr != ih->saddr) + return -EINVAL; + + if (daddr != ih->daddr) + return -EINVAL; + + if (ih->protocol != IPPROTO_UDP) + return -EINVAL; + + uh = (struct udphdr *)(ih + 1); + } else if (eh->h_proto == bpf_htons(ETH_P_IPV6)) { + struct in6_addr saddr = dut ? tester_addr : dut_addr; + struct in6_addr daddr = dut ? dut_addr : tester_addr; + struct ipv6hdr *ih6 = (struct ipv6hdr *)(eh + 1); + + if (ih6 + 1 > (struct ipv6hdr *)data_end) + return -EINVAL; + + if (!ipv6_addr_equal(saddr, ih6->saddr)) + return -EINVAL; + + if (!ipv6_addr_equal(daddr, ih6->daddr)) + return -EINVAL; + + if (ih6->nexthdr != IPPROTO_UDP) + return -EINVAL; + + uh = (struct udphdr *)(ih6 + 1); + } else { + return -EINVAL; + } + + if (uh + 1 > (struct udphdr *)data_end) + return -EINVAL; + + port = dut ? uh->dest : uh->source; + if (port != bpf_htons(DUT_ECHO_PORT)) + return -EINVAL; + + tlv = (struct tlv_hdr *)(uh + 1); + if (tlv + 1 > data_end) + return -EINVAL; + + return bpf_htons(tlv->type) == CMD_ECHO ? 
0 : -EINVAL; +} + +static __always_inline int +xdp_update_stats(struct xdp_md *xdp, bool tx, bool dut) +{ + __u32 *val, key = 0; + + if (xdp_process_echo_packet(xdp, tx)) + return -EINVAL; + + if (dut) + val = bpf_map_lookup_elem(&dut_stats, &key); + else + val = bpf_map_lookup_elem(&stats, &key); + + if (val) + __sync_add_and_fetch(val, 1); + + return 0; +} + +/* Tester */ + +SEC("xdp") +int xdp_tester_check_tx(struct xdp_md *xdp) +{ + xdp_update_stats(xdp, true, false); + + return XDP_PASS; +} + +SEC("xdp") +int xdp_tester_check_rx(struct xdp_md *xdp) +{ + xdp_update_stats(xdp, false, false); + + return XDP_PASS; +} + +/* DUT */ + +SEC("xdp") +int xdp_do_pass(struct xdp_md *xdp) +{ + xdp_update_stats(xdp, true, true); + + return XDP_PASS; +} + +SEC("xdp") +int xdp_do_drop(struct xdp_md *xdp) +{ + if (xdp_update_stats(xdp, true, true)) + return XDP_PASS; + + return XDP_DROP; +} + +SEC("xdp") +int xdp_do_aborted(struct xdp_md *xdp) +{ + if (xdp_process_echo_packet(xdp, true)) + return XDP_PASS; + + return XDP_ABORTED; +} + +SEC("xdp") +int xdp_do_tx(struct xdp_md *xdp) +{ + void *data = (void *)(long)xdp->data; + struct ethhdr *eh = data; + __u8 tmp_mac[ETH_ALEN]; + + if (xdp_update_stats(xdp, true, true)) + return XDP_PASS; + + __builtin_memcpy(tmp_mac, eh->h_source, ETH_ALEN); + __builtin_memcpy(eh->h_source, eh->h_dest, ETH_ALEN); + __builtin_memcpy(eh->h_dest, tmp_mac, ETH_ALEN); + + return XDP_TX; +} + +SEC("xdp") +int xdp_do_redirect(struct xdp_md *xdp) +{ + if (xdp_process_echo_packet(xdp, true)) + return XDP_PASS; + + return bpf_redirect_map(&cpu_map, 0, 0); +} + +SEC("tp_btf/xdp_exception") +int BPF_PROG(xdp_exception, const struct net_device *dev, + const struct bpf_prog *xdp, __u32 act) +{ + __u32 *val, key = 0; + + val = bpf_map_lookup_elem(&dut_stats, &key); + if (val) + __sync_add_and_fetch(val, 1); + + return 0; +} + +SEC("tp_btf/xdp_cpumap_kthread") +int BPF_PROG(tp_xdp_cpumap_kthread, int map_id, unsigned int processed, + unsigned int drops, int sched, struct xdp_cpumap_stats *xdp_stats) +{ + __u32 *val, key = 0; + + val = bpf_map_lookup_elem(&dut_stats, &key); + if (val) + __sync_add_and_fetch(val, 1); + + return 0; +} + +SEC("xdp/cpumap") +int xdp_do_redirect_cpumap(struct xdp_md *xdp) +{ + void *data = (void *)(long)xdp->data; + struct ethhdr *eh = data; + __u8 tmp_mac[ETH_ALEN]; + + if (xdp_process_echo_packet(xdp, true)) + return XDP_PASS; + + __builtin_memcpy(tmp_mac, eh->h_source, ETH_ALEN); + __builtin_memcpy(eh->h_source, eh->h_dest, ETH_ALEN); + __builtin_memcpy(eh->h_dest, tmp_mac, ETH_ALEN); + + return bpf_redirect_map(&dev_map, 0, 0); +} + +char _license[] SEC("license") = "GPL"; diff --git a/tools/testing/selftests/bpf/progs/xdp_hw_metadata.c b/tools/testing/selftests/bpf/progs/xdp_hw_metadata.c index 25b8178735ee..4c55b4d79d3d 100644 --- a/tools/testing/selftests/bpf/progs/xdp_hw_metadata.c +++ b/tools/testing/selftests/bpf/progs/xdp_hw_metadata.c @@ -70,10 +70,14 @@ int rx(struct xdp_md *ctx) } if (!bpf_xdp_metadata_rx_timestamp(ctx, &meta->rx_timestamp)) - bpf_printk("populated rx_timestamp with %u", meta->rx_timestamp); + bpf_printk("populated rx_timestamp with %llu", meta->rx_timestamp); + else + meta->rx_timestamp = 0; /* Used by AF_XDP as not avail signal */ if (!bpf_xdp_metadata_rx_hash(ctx, &meta->rx_hash)) bpf_printk("populated rx_hash with %u", meta->rx_hash); + else + meta->rx_hash = 0; /* Used by AF_XDP as not avail signal */ return bpf_redirect_map(&xsk, ctx->rx_queue_index, XDP_PASS); } diff --git 
a/tools/testing/selftests/bpf/progs/xdp_synproxy_kern.c b/tools/testing/selftests/bpf/progs/xdp_synproxy_kern.c index 736686e903f6..07d786329105 100644 --- a/tools/testing/selftests/bpf/progs/xdp_synproxy_kern.c +++ b/tools/testing/selftests/bpf/progs/xdp_synproxy_kern.c @@ -310,7 +310,7 @@ static __always_inline void values_get_tcpipopts(__u16 *mss, __u8 *wscale, static __always_inline void values_inc_synacks(void) { __u32 key = 1; - __u32 *value; + __u64 *value; value = bpf_map_lookup_elem(&values, &key); if (value) diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c index c5f852163246..6d5e3022c75f 100644 --- a/tools/testing/selftests/bpf/test_progs.c +++ b/tools/testing/selftests/bpf/test_progs.c @@ -17,6 +17,7 @@ #include <sys/select.h> #include <sys/socket.h> #include <sys/un.h> +#include <bpf/btf.h> static bool verbose(void) { @@ -967,6 +968,43 @@ int write_sysctl(const char *sysctl, const char *value) return 0; } +int get_bpf_max_tramp_links_from(struct btf *btf) +{ + const struct btf_enum *e; + const struct btf_type *t; + __u32 i, type_cnt; + const char *name; + __u16 j, vlen; + + for (i = 1, type_cnt = btf__type_cnt(btf); i < type_cnt; i++) { + t = btf__type_by_id(btf, i); + if (!t || !btf_is_enum(t) || t->name_off) + continue; + e = btf_enum(t); + for (j = 0, vlen = btf_vlen(t); j < vlen; j++, e++) { + name = btf__str_by_offset(btf, e->name_off); + if (name && !strcmp(name, "BPF_MAX_TRAMP_LINKS")) + return e->val; + } + } + + return -1; +} + +int get_bpf_max_tramp_links(void) +{ + struct btf *vmlinux_btf; + int ret; + + vmlinux_btf = btf__load_vmlinux_btf(); + if (!ASSERT_OK_PTR(vmlinux_btf, "vmlinux btf")) + return -1; + ret = get_bpf_max_tramp_links_from(vmlinux_btf); + btf__free(vmlinux_btf); + + return ret; +} + #define MAX_BACKTRACE_SZ 128 void crash_handler(int signum) { diff --git a/tools/testing/selftests/bpf/test_progs.h b/tools/testing/selftests/bpf/test_progs.h index 3f058dfadbaf..d5d51ec97ec8 100644 --- a/tools/testing/selftests/bpf/test_progs.h +++ b/tools/testing/selftests/bpf/test_progs.h @@ -394,6 +394,8 @@ int kern_sync_rcu(void); int trigger_module_test_read(int read_sz); int trigger_module_test_write(int write_sz); int write_sysctl(const char *sysctl, const char *value); +int get_bpf_max_tramp_links_from(struct btf *btf); +int get_bpf_max_tramp_links(void); #ifdef __x86_64__ #define SYS_NANOSLEEP_KPROBE_NAME "__x64_sys_nanosleep" diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c index 8c808551dfd7..887c49dc5abd 100644 --- a/tools/testing/selftests/bpf/test_verifier.c +++ b/tools/testing/selftests/bpf/test_verifier.c @@ -209,7 +209,7 @@ loop: insn[i++] = BPF_MOV64_IMM(BPF_REG_2, 1); insn[i++] = BPF_MOV64_IMM(BPF_REG_3, 2); insn[i++] = BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, - BPF_FUNC_skb_vlan_push), + BPF_FUNC_skb_vlan_push); insn[i] = BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, len - i - 3); i++; } @@ -220,7 +220,7 @@ loop: i++; insn[i++] = BPF_MOV64_REG(BPF_REG_1, BPF_REG_6); insn[i++] = BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, - BPF_FUNC_skb_vlan_pop), + BPF_FUNC_skb_vlan_pop); insn[i] = BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, len - i - 3); i++; } diff --git a/tools/testing/selftests/bpf/test_xdp_features.sh b/tools/testing/selftests/bpf/test_xdp_features.sh new file mode 100755 index 000000000000..0aa71c4455c0 --- /dev/null +++ b/tools/testing/selftests/bpf/test_xdp_features.sh @@ -0,0 +1,107 @@ +#!/bin/bash +# SPDX-License-Identifier: GPL-2.0 + +readonly 
NS="ns1-$(mktemp -u XXXXXX)" +readonly V0_IP4=10.10.0.11 +readonly V1_IP4=10.10.0.1 +readonly V0_IP6=2001:db8::11 +readonly V1_IP6=2001:db8::1 + +ret=1 + +setup() { + { + ip netns add ${NS} + + ip link add v1 type veth peer name v0 netns ${NS} + + ip link set v1 up + ip addr add $V1_IP4/24 dev v1 + ip addr add $V1_IP6/64 nodad dev v1 + ip -n ${NS} link set dev v0 up + ip -n ${NS} addr add $V0_IP4/24 dev v0 + ip -n ${NS} addr add $V0_IP6/64 nodad dev v0 + + # Enable XDP mode and disable checksum offload + ethtool -K v1 gro on + ethtool -K v1 tx-checksumming off + ip netns exec ${NS} ethtool -K v0 gro on + ip netns exec ${NS} ethtool -K v0 tx-checksumming off + } > /dev/null 2>&1 +} + +cleanup() { + ip link del v1 2> /dev/null + ip netns del ${NS} 2> /dev/null + [ "$(pidof xdp_features)" = "" ] || kill $(pidof xdp_features) 2> /dev/null +} + +wait_for_dut_server() { + while sleep 1; do + ss -tlp | grep -q xdp_features + [ $? -eq 0 ] && break + done +} + +test_xdp_features() { + setup + + ## XDP_PASS + ./xdp_features -f XDP_PASS -D $V1_IP6 -T $V0_IP6 v1 & + wait_for_dut_server + ip netns exec ${NS} ./xdp_features -t -f XDP_PASS \ + -D $V1_IP6 -C $V1_IP6 \ + -T $V0_IP6 v0 + [ $? -ne 0 ] && exit + + ## XDP_DROP + ./xdp_features -f XDP_DROP -D ::ffff:$V1_IP4 -T ::ffff:$V0_IP4 v1 & + wait_for_dut_server + ip netns exec ${NS} ./xdp_features -t -f XDP_DROP \ + -D ::ffff:$V1_IP4 \ + -C ::ffff:$V1_IP4 \ + -T ::ffff:$V0_IP4 v0 + [ $? -ne 0 ] && exit + + ## XDP_ABORTED + ./xdp_features -f XDP_ABORTED -D $V1_IP6 -T $V0_IP6 v1 & + wait_for_dut_server + ip netns exec ${NS} ./xdp_features -t -f XDP_ABORTED \ + -D $V1_IP6 -C $V1_IP6 \ + -T $V0_IP6 v0 + [ $? -ne 0 ] && exit + + ## XDP_TX + ./xdp_features -f XDP_TX -D ::ffff:$V1_IP4 -T ::ffff:$V0_IP4 v1 & + wait_for_dut_server + ip netns exec ${NS} ./xdp_features -t -f XDP_TX \ + -D ::ffff:$V1_IP4 \ + -C ::ffff:$V1_IP4 \ + -T ::ffff:$V0_IP4 v0 + [ $? -ne 0 ] && exit + + ## XDP_REDIRECT + ./xdp_features -f XDP_REDIRECT -D $V1_IP6 -T $V0_IP6 v1 & + wait_for_dut_server + ip netns exec ${NS} ./xdp_features -t -f XDP_REDIRECT \ + -D $V1_IP6 -C $V1_IP6 \ + -T $V0_IP6 v0 + [ $? -ne 0 ] && exit + + ## XDP_NDO_XMIT + ./xdp_features -f XDP_NDO_XMIT -D ::ffff:$V1_IP4 -T ::ffff:$V0_IP4 v1 & + wait_for_dut_server + ip netns exec ${NS} ./xdp_features -t -f XDP_NDO_XMIT \ + -D ::ffff:$V1_IP4 \ + -C ::ffff:$V1_IP4 \ + -T ::ffff:$V0_IP4 v0 + ret=$? 
+ cleanup +} + +set -e +trap cleanup 2 3 6 9 + +test_xdp_features + +exit $ret diff --git a/tools/testing/selftests/bpf/vmtest.sh b/tools/testing/selftests/bpf/vmtest.sh index 316a56d680f2..685034528018 100755 --- a/tools/testing/selftests/bpf/vmtest.sh +++ b/tools/testing/selftests/bpf/vmtest.sh @@ -13,7 +13,7 @@ s390x) QEMU_BINARY=qemu-system-s390x QEMU_CONSOLE="ttyS1" QEMU_FLAGS=(-smp 2) - BZIMAGE="arch/s390/boot/compressed/vmlinux" + BZIMAGE="arch/s390/boot/vmlinux" ;; x86_64) QEMU_BINARY=qemu-system-x86_64 diff --git a/tools/testing/selftests/bpf/xdp_features.c b/tools/testing/selftests/bpf/xdp_features.c new file mode 100644 index 000000000000..fce12165213b --- /dev/null +++ b/tools/testing/selftests/bpf/xdp_features.c @@ -0,0 +1,699 @@ +// SPDX-License-Identifier: GPL-2.0 +#include <uapi/linux/bpf.h> +#include <uapi/linux/netdev.h> +#include <linux/if_link.h> +#include <signal.h> +#include <argp.h> +#include <net/if.h> +#include <sys/socket.h> +#include <netinet/in.h> +#include <netinet/tcp.h> +#include <unistd.h> +#include <arpa/inet.h> +#include <bpf/bpf.h> +#include <bpf/libbpf.h> +#include <pthread.h> + +#include <network_helpers.h> + +#include "xdp_features.skel.h" +#include "xdp_features.h" + +#define RED(str) "\033[0;31m" str "\033[0m" +#define GREEN(str) "\033[0;32m" str "\033[0m" +#define YELLOW(str) "\033[0;33m" str "\033[0m" + +static struct env { + bool verbosity; + int ifindex; + bool is_tester; + struct { + enum netdev_xdp_act drv_feature; + enum xdp_action action; + } feature; + struct sockaddr_storage dut_ctrl_addr; + struct sockaddr_storage dut_addr; + struct sockaddr_storage tester_addr; +} env; + +#define BUFSIZE 128 + +void test__fail(void) { /* for network_helpers.c */ } + +static int libbpf_print_fn(enum libbpf_print_level level, + const char *format, va_list args) +{ + if (level == LIBBPF_DEBUG && !env.verbosity) + return 0; + return vfprintf(stderr, format, args); +} + +static volatile bool exiting; + +static void sig_handler(int sig) +{ + exiting = true; +} + +const char *argp_program_version = "xdp-features 0.0"; +const char argp_program_doc[] = +"XDP features detection application.\n" +"\n" +"XDP features application checks the XDP advertised features match detected ones.\n" +"\n" +"USAGE: ./xdp-features [-vt] [-f <xdp-feature>] [-D <dut-data-ip>] [-T <tester-data-ip>] [-C <dut-ctrl-ip>] <iface-name>\n" +"\n" +"dut-data-ip, tester-data-ip, dut-ctrl-ip: IPv6 or IPv4-mapped-IPv6 addresses;\n" +"\n" +"XDP features\n:" +"- XDP_PASS\n" +"- XDP_DROP\n" +"- XDP_ABORTED\n" +"- XDP_REDIRECT\n" +"- XDP_NDO_XMIT\n" +"- XDP_TX\n"; + +static const struct argp_option opts[] = { + { "verbose", 'v', NULL, 0, "Verbose debug output" }, + { "tester", 't', NULL, 0, "Tester mode" }, + { "feature", 'f', "XDP-FEATURE", 0, "XDP feature to test" }, + { "dut_data_ip", 'D', "DUT-DATA-IP", 0, "DUT IP data channel" }, + { "dut_ctrl_ip", 'C', "DUT-CTRL-IP", 0, "DUT IP control channel" }, + { "tester_data_ip", 'T', "TESTER-DATA-IP", 0, "Tester IP data channel" }, + {}, +}; + +static int get_xdp_feature(const char *arg) +{ + if (!strcmp(arg, "XDP_PASS")) { + env.feature.action = XDP_PASS; + env.feature.drv_feature = NETDEV_XDP_ACT_BASIC; + } else if (!strcmp(arg, "XDP_DROP")) { + env.feature.drv_feature = NETDEV_XDP_ACT_BASIC; + env.feature.action = XDP_DROP; + } else if (!strcmp(arg, "XDP_ABORTED")) { + env.feature.drv_feature = NETDEV_XDP_ACT_BASIC; + env.feature.action = XDP_ABORTED; + } else if (!strcmp(arg, "XDP_TX")) { + env.feature.drv_feature = NETDEV_XDP_ACT_BASIC; + 
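	/* Annotation (not part of the patch): PASS/DROP/ABORTED/TX are
	 * all plain verdicts, so each of these branches advertises
	 * NETDEV_XDP_ACT_BASIC and only the expected action differs. */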
env.feature.action = XDP_TX; + } else if (!strcmp(arg, "XDP_REDIRECT")) { + env.feature.drv_feature = NETDEV_XDP_ACT_REDIRECT; + env.feature.action = XDP_REDIRECT; + } else if (!strcmp(arg, "XDP_NDO_XMIT")) { + env.feature.drv_feature = NETDEV_XDP_ACT_NDO_XMIT; + } else { + return -EINVAL; + } + + return 0; +} + +static char *get_xdp_feature_str(void) +{ + switch (env.feature.action) { + case XDP_PASS: + return YELLOW("XDP_PASS"); + case XDP_DROP: + return YELLOW("XDP_DROP"); + case XDP_ABORTED: + return YELLOW("XDP_ABORTED"); + case XDP_TX: + return YELLOW("XDP_TX"); + case XDP_REDIRECT: + return YELLOW("XDP_REDIRECT"); + default: + break; + } + + if (env.feature.drv_feature == NETDEV_XDP_ACT_NDO_XMIT) + return YELLOW("XDP_NDO_XMIT"); + + return ""; +} + +static error_t parse_arg(int key, char *arg, struct argp_state *state) +{ + switch (key) { + case 'v': + env.verbosity = true; + break; + case 't': + env.is_tester = true; + break; + case 'f': + if (get_xdp_feature(arg) < 0) { + fprintf(stderr, "Invalid xdp feature: %s\n", arg); + argp_usage(state); + return ARGP_ERR_UNKNOWN; + } + break; + case 'D': + if (make_sockaddr(AF_INET6, arg, DUT_ECHO_PORT, + &env.dut_addr, NULL)) { + fprintf(stderr, "Invalid DUT address: %s\n", arg); + return ARGP_ERR_UNKNOWN; + } + break; + case 'C': + if (make_sockaddr(AF_INET6, arg, DUT_CTRL_PORT, + &env.dut_ctrl_addr, NULL)) { + fprintf(stderr, "Invalid DUT CTRL address: %s\n", arg); + return ARGP_ERR_UNKNOWN; + } + break; + case 'T': + if (make_sockaddr(AF_INET6, arg, 0, &env.tester_addr, NULL)) { + fprintf(stderr, "Invalid Tester address: %s\n", arg); + return ARGP_ERR_UNKNOWN; + } + break; + case ARGP_KEY_ARG: + errno = 0; + if (strlen(arg) >= IF_NAMESIZE) { + fprintf(stderr, "Invalid device name: %s\n", arg); + argp_usage(state); + return ARGP_ERR_UNKNOWN; + } + + env.ifindex = if_nametoindex(arg); + if (!env.ifindex) + env.ifindex = strtoul(arg, NULL, 0); + if (!env.ifindex) { + fprintf(stderr, + "Bad interface index or name (%d): %s\n", + errno, strerror(errno)); + argp_usage(state); + return ARGP_ERR_UNKNOWN; + } + break; + default: + return ARGP_ERR_UNKNOWN; + } + + return 0; +} + +static const struct argp argp = { + .options = opts, + .parser = parse_arg, + .doc = argp_program_doc, +}; + +static void set_env_default(void) +{ + env.feature.drv_feature = NETDEV_XDP_ACT_NDO_XMIT; + env.feature.action = -EINVAL; + env.ifindex = -ENODEV; + make_sockaddr(AF_INET6, "::ffff:127.0.0.1", DUT_CTRL_PORT, + &env.dut_ctrl_addr, NULL); + make_sockaddr(AF_INET6, "::ffff:127.0.0.1", DUT_ECHO_PORT, + &env.dut_addr, NULL); + make_sockaddr(AF_INET6, "::ffff:127.0.0.1", 0, &env.tester_addr, NULL); +} + +static void *dut_echo_thread(void *arg) +{ + unsigned char buf[sizeof(struct tlv_hdr)]; + int sockfd = *(int *)arg; + + while (!exiting) { + struct tlv_hdr *tlv = (struct tlv_hdr *)buf; + struct sockaddr_storage addr; + socklen_t addrlen; + size_t n; + + n = recvfrom(sockfd, buf, sizeof(buf), MSG_WAITALL, + (struct sockaddr *)&addr, &addrlen); + if (n != ntohs(tlv->len)) + continue; + + if (ntohs(tlv->type) != CMD_ECHO) + continue; + + sendto(sockfd, buf, sizeof(buf), MSG_NOSIGNAL | MSG_CONFIRM, + (struct sockaddr *)&addr, addrlen); + } + + pthread_exit((void *)0); + close(sockfd); + + return NULL; +} + +static int dut_run_echo_thread(pthread_t *t, int *sockfd) +{ + int err; + + sockfd = start_reuseport_server(AF_INET6, SOCK_DGRAM, NULL, + DUT_ECHO_PORT, 0, 1); + if (!sockfd) { + fprintf(stderr, "Failed to create echo socket\n"); + return -errno; + } + + /* start echo 
channel */ + err = pthread_create(t, NULL, dut_echo_thread, sockfd); + if (err) { + fprintf(stderr, "Failed creating dut_echo thread: %s\n", + strerror(-err)); + free_fds(sockfd, 1); + return -EINVAL; + } + + return 0; +} + +static int dut_attach_xdp_prog(struct xdp_features *skel, int flags) +{ + enum xdp_action action = env.feature.action; + struct bpf_program *prog; + unsigned int key = 0; + int err, fd = 0; + + if (env.feature.drv_feature == NETDEV_XDP_ACT_NDO_XMIT) { + struct bpf_devmap_val entry = { + .ifindex = env.ifindex, + }; + + err = bpf_map__update_elem(skel->maps.dev_map, + &key, sizeof(key), + &entry, sizeof(entry), 0); + if (err < 0) + return err; + + fd = bpf_program__fd(skel->progs.xdp_do_redirect_cpumap); + action = XDP_REDIRECT; + } + + switch (action) { + case XDP_TX: + prog = skel->progs.xdp_do_tx; + break; + case XDP_DROP: + prog = skel->progs.xdp_do_drop; + break; + case XDP_ABORTED: + prog = skel->progs.xdp_do_aborted; + break; + case XDP_PASS: + prog = skel->progs.xdp_do_pass; + break; + case XDP_REDIRECT: { + struct bpf_cpumap_val entry = { + .qsize = 2048, + .bpf_prog.fd = fd, + }; + + err = bpf_map__update_elem(skel->maps.cpu_map, + &key, sizeof(key), + &entry, sizeof(entry), 0); + if (err < 0) + return err; + + prog = skel->progs.xdp_do_redirect; + break; + } + default: + return -EINVAL; + } + + err = bpf_xdp_attach(env.ifindex, bpf_program__fd(prog), flags, NULL); + if (err) + fprintf(stderr, + "Failed to attach XDP program to ifindex %d\n", + env.ifindex); + return err; +} + +static int recv_msg(int sockfd, void *buf, size_t bufsize, void *val, + size_t val_size) +{ + struct tlv_hdr *tlv = (struct tlv_hdr *)buf; + size_t len; + + len = recv(sockfd, buf, bufsize, 0); + if (len != ntohs(tlv->len) || len < sizeof(*tlv)) + return -EINVAL; + + if (val) { + len -= sizeof(*tlv); + if (len > val_size) + return -ENOMEM; + + memcpy(val, tlv->data, len); + } + + return 0; +} + +static int dut_run(struct xdp_features *skel) +{ + int flags = XDP_FLAGS_UPDATE_IF_NOEXIST | XDP_FLAGS_DRV_MODE; + int state, err, *sockfd, ctrl_sockfd, echo_sockfd; + struct sockaddr_storage ctrl_addr; + pthread_t dut_thread; + socklen_t addrlen; + + sockfd = start_reuseport_server(AF_INET6, SOCK_STREAM, NULL, + DUT_CTRL_PORT, 0, 1); + if (!sockfd) { + fprintf(stderr, "Failed to create DUT socket\n"); + return -errno; + } + + ctrl_sockfd = accept(*sockfd, (struct sockaddr *)&ctrl_addr, &addrlen); + if (ctrl_sockfd < 0) { + fprintf(stderr, "Failed to accept connection on DUT socket\n"); + free_fds(sockfd, 1); + return -errno; + } + + /* CTRL loop */ + while (!exiting) { + unsigned char buf[BUFSIZE] = {}; + struct tlv_hdr *tlv = (struct tlv_hdr *)buf; + + err = recv_msg(ctrl_sockfd, buf, BUFSIZE, NULL, 0); + if (err) + continue; + + switch (ntohs(tlv->type)) { + case CMD_START: { + if (state == CMD_START) + continue; + + state = CMD_START; + /* Load the XDP program on the DUT */ + err = dut_attach_xdp_prog(skel, flags); + if (err) + goto out; + + err = dut_run_echo_thread(&dut_thread, &echo_sockfd); + if (err < 0) + goto out; + + tlv->type = htons(CMD_ACK); + tlv->len = htons(sizeof(*tlv)); + err = send(ctrl_sockfd, buf, sizeof(*tlv), 0); + if (err < 0) + goto end_thread; + break; + } + case CMD_STOP: + if (state != CMD_START) + break; + + state = CMD_STOP; + + exiting = true; + bpf_xdp_detach(env.ifindex, flags, NULL); + + tlv->type = htons(CMD_ACK); + tlv->len = htons(sizeof(*tlv)); + err = send(ctrl_sockfd, buf, sizeof(*tlv), 0); + goto end_thread; + case CMD_GET_XDP_CAP: { + 
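		/* Annotation (not part of the patch): capability query --
		 * the DUT reads the driver's advertised feature_flags via
		 * bpf_xdp_query() and returns them big-endian in the
		 * CMD_ACK payload. */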
LIBBPF_OPTS(bpf_xdp_query_opts, opts); + unsigned long long val; + size_t n; + + err = bpf_xdp_query(env.ifindex, XDP_FLAGS_DRV_MODE, + &opts); + if (err) { + fprintf(stderr, + "Failed to query XDP cap for ifindex %d\n", + env.ifindex); + goto end_thread; + } + + tlv->type = htons(CMD_ACK); + n = sizeof(*tlv) + sizeof(opts.feature_flags); + tlv->len = htons(n); + + val = htobe64(opts.feature_flags); + memcpy(tlv->data, &val, sizeof(val)); + + err = send(ctrl_sockfd, buf, n, 0); + if (err < 0) + goto end_thread; + break; + } + case CMD_GET_STATS: { + unsigned int key = 0, val; + size_t n; + + err = bpf_map__lookup_elem(skel->maps.dut_stats, + &key, sizeof(key), + &val, sizeof(val), 0); + if (err) { + fprintf(stderr, "bpf_map_lookup_elem failed\n"); + goto end_thread; + } + + tlv->type = htons(CMD_ACK); + n = sizeof(*tlv) + sizeof(val); + tlv->len = htons(n); + + val = htonl(val); + memcpy(tlv->data, &val, sizeof(val)); + + err = send(ctrl_sockfd, buf, n, 0); + if (err < 0) + goto end_thread; + break; + } + default: + break; + } + } + +end_thread: + pthread_join(dut_thread, NULL); +out: + bpf_xdp_detach(env.ifindex, flags, NULL); + close(ctrl_sockfd); + free_fds(sockfd, 1); + + return err; +} + +static bool tester_collect_detected_cap(struct xdp_features *skel, + unsigned int dut_stats) +{ + unsigned int err, key = 0, val; + + if (!dut_stats) + return false; + + err = bpf_map__lookup_elem(skel->maps.stats, &key, sizeof(key), + &val, sizeof(val), 0); + if (err) { + fprintf(stderr, "bpf_map_lookup_elem failed\n"); + return false; + } + + switch (env.feature.action) { + case XDP_PASS: + case XDP_TX: + case XDP_REDIRECT: + return val > 0; + case XDP_DROP: + case XDP_ABORTED: + return val == 0; + default: + break; + } + + if (env.feature.drv_feature == NETDEV_XDP_ACT_NDO_XMIT) + return val > 0; + + return false; +} + +static int send_and_recv_msg(int sockfd, enum test_commands cmd, void *val, + size_t val_size) +{ + unsigned char buf[BUFSIZE] = {}; + struct tlv_hdr *tlv = (struct tlv_hdr *)buf; + int err; + + tlv->type = htons(cmd); + tlv->len = htons(sizeof(*tlv)); + + err = send(sockfd, buf, sizeof(*tlv), 0); + if (err < 0) + return err; + + err = recv_msg(sockfd, buf, BUFSIZE, val, val_size); + if (err < 0) + return err; + + return ntohs(tlv->type) == CMD_ACK ? 0 : -EINVAL; +} + +static int send_echo_msg(void) +{ + unsigned char buf[sizeof(struct tlv_hdr)]; + struct tlv_hdr *tlv = (struct tlv_hdr *)buf; + int sockfd, n; + + sockfd = socket(AF_INET6, SOCK_DGRAM, 0); + if (sockfd < 0) { + fprintf(stderr, "Failed to create echo socket\n"); + return -errno; + } + + tlv->type = htons(CMD_ECHO); + tlv->len = htons(sizeof(*tlv)); + + n = sendto(sockfd, buf, sizeof(*tlv), MSG_NOSIGNAL | MSG_CONFIRM, + (struct sockaddr *)&env.dut_addr, sizeof(env.dut_addr)); + close(sockfd); + + return n == ntohs(tlv->len) ? 
0 : -EINVAL; +} + +static int tester_run(struct xdp_features *skel) +{ + int flags = XDP_FLAGS_UPDATE_IF_NOEXIST | XDP_FLAGS_DRV_MODE; + unsigned long long advertised_feature; + struct bpf_program *prog; + unsigned int stats; + int i, err, sockfd; + bool detected_cap; + + sockfd = socket(AF_INET6, SOCK_STREAM, 0); + if (sockfd < 0) { + fprintf(stderr, "Failed to create tester socket\n"); + return -errno; + } + + if (settimeo(sockfd, 1000) < 0) + return -EINVAL; + + err = connect(sockfd, (struct sockaddr *)&env.dut_ctrl_addr, + sizeof(env.dut_ctrl_addr)); + if (err) { + fprintf(stderr, "Failed to connect to the DUT\n"); + return -errno; + } + + err = send_and_recv_msg(sockfd, CMD_GET_XDP_CAP, &advertised_feature, + sizeof(advertised_feature)); + if (err < 0) { + close(sockfd); + return err; + } + + advertised_feature = be64toh(advertised_feature); + + if (env.feature.drv_feature == NETDEV_XDP_ACT_NDO_XMIT || + env.feature.action == XDP_TX) + prog = skel->progs.xdp_tester_check_tx; + else + prog = skel->progs.xdp_tester_check_rx; + + err = bpf_xdp_attach(env.ifindex, bpf_program__fd(prog), flags, NULL); + if (err) { + fprintf(stderr, "Failed to attach XDP program to ifindex %d\n", + env.ifindex); + goto out; + } + + err = send_and_recv_msg(sockfd, CMD_START, NULL, 0); + if (err) + goto out; + + for (i = 0; i < 10 && !exiting; i++) { + err = send_echo_msg(); + if (err < 0) + goto out; + + sleep(1); + } + + err = send_and_recv_msg(sockfd, CMD_GET_STATS, &stats, sizeof(stats)); + if (err) + goto out; + + /* stop the test */ + err = send_and_recv_msg(sockfd, CMD_STOP, NULL, 0); + /* send a new echo message to wake echo thread of the dut */ + send_echo_msg(); + + detected_cap = tester_collect_detected_cap(skel, ntohl(stats)); + + fprintf(stdout, "Feature %s: [%s][%s]\n", get_xdp_feature_str(), + detected_cap ? GREEN("DETECTED") : RED("NOT DETECTED"), + env.feature.drv_feature & advertised_feature ? GREEN("ADVERTISED") + : RED("NOT ADVERTISED")); +out: + bpf_xdp_detach(env.ifindex, flags, NULL); + close(sockfd); + return err < 0 ? err : 0; +} + +int main(int argc, char **argv) +{ + struct xdp_features *skel; + int err; + + libbpf_set_strict_mode(LIBBPF_STRICT_ALL); + libbpf_set_print(libbpf_print_fn); + + signal(SIGINT, sig_handler); + signal(SIGTERM, sig_handler); + + set_env_default(); + + /* Parse command line arguments */ + err = argp_parse(&argp, argc, argv, 0, NULL, NULL); + if (err) + return err; + + if (env.ifindex < 0) { + fprintf(stderr, "Invalid ifindex\n"); + return -ENODEV; + } + + /* Load and verify BPF application */ + skel = xdp_features__open(); + if (!skel) { + fprintf(stderr, "Failed to open and load BPF skeleton\n"); + return -EINVAL; + } + + skel->rodata->tester_addr = + ((struct sockaddr_in6 *)&env.tester_addr)->sin6_addr; + skel->rodata->dut_addr = + ((struct sockaddr_in6 *)&env.dut_addr)->sin6_addr; + + /* Load & verify BPF programs */ + err = xdp_features__load(skel); + if (err) { + fprintf(stderr, "Failed to load and verify BPF skeleton\n"); + goto cleanup; + } + + err = xdp_features__attach(skel); + if (err) { + fprintf(stderr, "Failed to attach BPF skeleton\n"); + goto cleanup; + } + + if (env.is_tester) { + /* Tester */ + fprintf(stdout, "Starting tester on device %d\n", env.ifindex); + err = tester_run(skel); + } else { + /* DUT */ + fprintf(stdout, "Starting DUT on device %d\n", env.ifindex); + err = dut_run(skel); + } + +cleanup: + xdp_features__destroy(skel); + + return err < 0 ? 
-err : 0; +} diff --git a/tools/testing/selftests/bpf/xdp_features.h b/tools/testing/selftests/bpf/xdp_features.h new file mode 100644 index 000000000000..2670c541713b --- /dev/null +++ b/tools/testing/selftests/bpf/xdp_features.h @@ -0,0 +1,20 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* test commands */ +enum test_commands { + CMD_STOP, /* CMD */ + CMD_START, /* CMD */ + CMD_ECHO, /* CMD */ + CMD_ACK, /* CMD + data */ + CMD_GET_XDP_CAP, /* CMD */ + CMD_GET_STATS, /* CMD */ +}; + +#define DUT_CTRL_PORT 12345 +#define DUT_ECHO_PORT 12346 + +struct tlv_hdr { + __be16 type; + __be16 len; + __u8 data[]; +}; diff --git a/tools/testing/selftests/bpf/xdp_hw_metadata.c b/tools/testing/selftests/bpf/xdp_hw_metadata.c index 3823b1c499cc..1c8acb68b977 100644 --- a/tools/testing/selftests/bpf/xdp_hw_metadata.c +++ b/tools/testing/selftests/bpf/xdp_hw_metadata.c @@ -24,7 +24,6 @@ #include <linux/net_tstamp.h> #include <linux/udp.h> #include <linux/sockios.h> -#include <linux/net_tstamp.h> #include <sys/mman.h> #include <net/if.h> #include <poll.h> @@ -121,7 +120,7 @@ static void close_xsk(struct xsk *xsk) xsk_umem__delete(xsk->umem); if (xsk->socket) xsk_socket__delete(xsk->socket); - munmap(xsk->umem, UMEM_SIZE); + munmap(xsk->umem_area, UMEM_SIZE); } static void refill_rx(struct xsk *xsk, __u64 addr) @@ -165,7 +164,7 @@ static void verify_skb_metadata(int fd) hdr.msg_controllen = sizeof(cmsg_buf); if (recvmsg(fd, &hdr, 0) < 0) - error(-1, errno, "recvmsg"); + error(1, errno, "recvmsg"); for (cmsg = CMSG_FIRSTHDR(&hdr); cmsg != NULL; cmsg = CMSG_NXTHDR(&hdr, cmsg)) { @@ -270,16 +269,16 @@ static int rxq_num(const char *ifname) struct ifreq ifr = { .ifr_data = (void *)&ch, }; - strcpy(ifr.ifr_name, ifname); + strncpy(ifr.ifr_name, ifname, IF_NAMESIZE - 1); int fd, ret; fd = socket(AF_UNIX, SOCK_DGRAM, 0); if (fd < 0) - error(-1, errno, "socket"); + error(1, errno, "socket"); ret = ioctl(fd, SIOCETHTOOL, &ifr); if (ret < 0) - error(-1, errno, "ioctl(SIOCETHTOOL)"); + error(1, errno, "ioctl(SIOCETHTOOL)"); close(fd); @@ -291,16 +290,16 @@ static void hwtstamp_ioctl(int op, const char *ifname, struct hwtstamp_config *c struct ifreq ifr = { .ifr_data = (void *)cfg, }; - strcpy(ifr.ifr_name, ifname); + strncpy(ifr.ifr_name, ifname, IF_NAMESIZE - 1); int fd, ret; fd = socket(AF_UNIX, SOCK_DGRAM, 0); if (fd < 0) - error(-1, errno, "socket"); + error(1, errno, "socket"); ret = ioctl(fd, op, &ifr); if (ret < 0) - error(-1, errno, "ioctl(%d)", op); + error(1, errno, "ioctl(%d)", op); close(fd); } @@ -360,7 +359,7 @@ static void timestamping_enable(int fd, int val) ret = setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING, &val, sizeof(val)); if (ret < 0) - error(-1, errno, "setsockopt(SO_TIMESTAMPING)"); + error(1, errno, "setsockopt(SO_TIMESTAMPING)"); } int main(int argc, char *argv[]) @@ -386,13 +385,13 @@ int main(int argc, char *argv[]) rx_xsk = malloc(sizeof(struct xsk) * rxq); if (!rx_xsk) - error(-1, ENOMEM, "malloc"); + error(1, ENOMEM, "malloc"); for (i = 0; i < rxq; i++) { printf("open_xsk(%s, %p, %d)\n", ifname, &rx_xsk[i], i); ret = open_xsk(ifindex, &rx_xsk[i], i); if (ret) - error(-1, -ret, "open_xsk"); + error(1, -ret, "open_xsk"); printf("xsk_socket__fd() -> %d\n", xsk_socket__fd(rx_xsk[i].socket)); } @@ -400,7 +399,7 @@ int main(int argc, char *argv[]) printf("open bpf program...\n"); bpf_obj = xdp_hw_metadata__open(); if (libbpf_get_error(bpf_obj)) - error(-1, libbpf_get_error(bpf_obj), "xdp_hw_metadata__open"); + error(1, libbpf_get_error(bpf_obj), "xdp_hw_metadata__open"); prog = 
bpf_object__find_program_by_name(bpf_obj->obj, "rx"); bpf_program__set_ifindex(prog, ifindex); @@ -409,12 +408,12 @@ int main(int argc, char *argv[]) printf("load bpf program...\n"); ret = xdp_hw_metadata__load(bpf_obj); if (ret) - error(-1, -ret, "xdp_hw_metadata__load"); + error(1, -ret, "xdp_hw_metadata__load"); printf("prepare skb endpoint...\n"); server_fd = start_server(AF_INET6, SOCK_DGRAM, NULL, 9092, 1000); if (server_fd < 0) - error(-1, errno, "start_server"); + error(1, errno, "start_server"); timestamping_enable(server_fd, SOF_TIMESTAMPING_SOFTWARE | SOF_TIMESTAMPING_RAW_HARDWARE); @@ -427,7 +426,7 @@ int main(int argc, char *argv[]) printf("map[%d] = %d\n", queue_id, sock_fd); ret = bpf_map_update_elem(bpf_map__fd(bpf_obj->maps.xsk), &queue_id, &sock_fd, 0); if (ret) - error(-1, -ret, "bpf_map_update_elem"); + error(1, -ret, "bpf_map_update_elem"); } printf("attach bpf program...\n"); @@ -435,12 +434,12 @@ int main(int argc, char *argv[]) bpf_program__fd(bpf_obj->progs.rx), XDP_FLAGS, NULL); if (ret) - error(-1, -ret, "bpf_xdp_attach"); + error(1, -ret, "bpf_xdp_attach"); signal(SIGINT, handle_signal); ret = verify_metadata(rx_xsk, rxq, server_fd); close(server_fd); cleanup(); if (ret) - error(-1, -ret, "verify_metadata"); + error(1, -ret, "verify_metadata"); } diff --git a/tools/testing/selftests/bpf/xdp_synproxy.c b/tools/testing/selftests/bpf/xdp_synproxy.c index 410a1385a01d..6dbe0b745198 100644 --- a/tools/testing/selftests/bpf/xdp_synproxy.c +++ b/tools/testing/selftests/bpf/xdp_synproxy.c @@ -116,6 +116,7 @@ static void parse_options(int argc, char *argv[], unsigned int *ifindex, __u32 * *tcpipopts = 0; *ports = NULL; *single = false; + *tc = false; while (true) { int opt; diff --git a/tools/testing/selftests/drivers/net/ocelot/tc_flower_chains.sh b/tools/testing/selftests/drivers/net/ocelot/tc_flower_chains.sh index 9c79bbcce5a8..aff0a59f92d9 100755 --- a/tools/testing/selftests/drivers/net/ocelot/tc_flower_chains.sh +++ b/tools/testing/selftests/drivers/net/ocelot/tc_flower_chains.sh @@ -246,7 +246,7 @@ test_vlan_ingress_modify() bridge vlan add dev $swp2 vid 300 tc filter add dev $swp1 ingress chain $(IS1 2) pref 3 \ - protocol 802.1Q flower skip_sw vlan_id 200 \ + protocol 802.1Q flower skip_sw vlan_id 200 src_mac $h1_mac \ action vlan modify id 300 \ action goto chain $(IS2 0 0) diff --git a/tools/testing/selftests/filesystems/fat/run_fat_tests.sh b/tools/testing/selftests/filesystems/fat/run_fat_tests.sh index 7f35dc3d15df..7f35dc3d15df 100644..100755 --- a/tools/testing/selftests/filesystems/fat/run_fat_tests.sh +++ b/tools/testing/selftests/filesystems/fat/run_fat_tests.sh diff --git a/tools/testing/selftests/kvm/aarch64/page_fault_test.c b/tools/testing/selftests/kvm/aarch64/page_fault_test.c index beb944fa6fd4..54680dc5887f 100644 --- a/tools/testing/selftests/kvm/aarch64/page_fault_test.c +++ b/tools/testing/selftests/kvm/aarch64/page_fault_test.c @@ -237,6 +237,11 @@ static void guest_check_s1ptw_wr_in_dirty_log(void) GUEST_SYNC(CMD_CHECK_S1PTW_WR_IN_DIRTY_LOG); } +static void guest_check_no_s1ptw_wr_in_dirty_log(void) +{ + GUEST_SYNC(CMD_CHECK_NO_S1PTW_WR_IN_DIRTY_LOG); +} + static void guest_exec(void) { int (*code)(void) = (int (*)(void))TEST_EXEC_GVA; @@ -304,7 +309,7 @@ static struct uffd_args { /* Returns true to continue the test, and false if it should be skipped. 
*/ static int uffd_generic_handler(int uffd_mode, int uffd, struct uffd_msg *msg, - struct uffd_args *args, bool expect_write) + struct uffd_args *args) { uint64_t addr = msg->arg.pagefault.address; uint64_t flags = msg->arg.pagefault.flags; @@ -313,7 +318,6 @@ static int uffd_generic_handler(int uffd_mode, int uffd, struct uffd_msg *msg, TEST_ASSERT(uffd_mode == UFFDIO_REGISTER_MODE_MISSING, "The only expected UFFD mode is MISSING"); - ASSERT_EQ(!!(flags & UFFD_PAGEFAULT_FLAG_WRITE), expect_write); ASSERT_EQ(addr, (uint64_t)args->hva); pr_debug("uffd fault: addr=%p write=%d\n", @@ -337,19 +341,14 @@ static int uffd_generic_handler(int uffd_mode, int uffd, struct uffd_msg *msg, return 0; } -static int uffd_pt_write_handler(int mode, int uffd, struct uffd_msg *msg) -{ - return uffd_generic_handler(mode, uffd, msg, &pt_args, true); -} - -static int uffd_data_write_handler(int mode, int uffd, struct uffd_msg *msg) +static int uffd_pt_handler(int mode, int uffd, struct uffd_msg *msg) { - return uffd_generic_handler(mode, uffd, msg, &data_args, true); + return uffd_generic_handler(mode, uffd, msg, &pt_args); } -static int uffd_data_read_handler(int mode, int uffd, struct uffd_msg *msg) +static int uffd_data_handler(int mode, int uffd, struct uffd_msg *msg) { - return uffd_generic_handler(mode, uffd, msg, &data_args, false); + return uffd_generic_handler(mode, uffd, msg, &data_args); } static void setup_uffd_args(struct userspace_mem_region *region, @@ -471,9 +470,12 @@ static bool handle_cmd(struct kvm_vm *vm, int cmd) { struct userspace_mem_region *data_region, *pt_region; bool continue_test = true; + uint64_t pte_gpa, pte_pg; data_region = vm_get_mem_region(vm, MEM_REGION_TEST_DATA); pt_region = vm_get_mem_region(vm, MEM_REGION_PT); + pte_gpa = addr_hva2gpa(vm, virt_get_pte_hva(vm, TEST_GVA)); + pte_pg = (pte_gpa - pt_region->region.guest_phys_addr) / getpagesize(); if (cmd == CMD_SKIP_TEST) continue_test = false; @@ -486,13 +488,13 @@ static bool handle_cmd(struct kvm_vm *vm, int cmd) TEST_ASSERT(check_write_in_dirty_log(vm, data_region, 0), "Missing write in dirty log"); if (cmd & CMD_CHECK_S1PTW_WR_IN_DIRTY_LOG) - TEST_ASSERT(check_write_in_dirty_log(vm, pt_region, 0), + TEST_ASSERT(check_write_in_dirty_log(vm, pt_region, pte_pg), "Missing s1ptw write in dirty log"); if (cmd & CMD_CHECK_NO_WRITE_IN_DIRTY_LOG) TEST_ASSERT(!check_write_in_dirty_log(vm, data_region, 0), "Unexpected write in dirty log"); if (cmd & CMD_CHECK_NO_S1PTW_WR_IN_DIRTY_LOG) - TEST_ASSERT(!check_write_in_dirty_log(vm, pt_region, 0), + TEST_ASSERT(!check_write_in_dirty_log(vm, pt_region, pte_pg), "Unexpected s1ptw write in dirty log"); return continue_test; @@ -797,7 +799,7 @@ static void help(char *name) .expected_events = { .uffd_faults = _uffd_faults, }, \ } -#define TEST_DIRTY_LOG(_access, _with_af, _test_check) \ +#define TEST_DIRTY_LOG(_access, _with_af, _test_check, _pt_check) \ { \ .name = SCAT3(dirty_log, _access, _with_af), \ .data_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES, \ @@ -805,13 +807,12 @@ static void help(char *name) .guest_prepare = { _PREPARE(_with_af), \ _PREPARE(_access) }, \ .guest_test = _access, \ - .guest_test_check = { _CHECK(_with_af), _test_check, \ - guest_check_s1ptw_wr_in_dirty_log}, \ + .guest_test_check = { _CHECK(_with_af), _test_check, _pt_check }, \ .expected_events = { 0 }, \ } #define TEST_UFFD_AND_DIRTY_LOG(_access, _with_af, _uffd_data_handler, \ - _uffd_faults, _test_check) \ + _uffd_faults, _test_check, _pt_check) \ { \ .name = SCAT3(uffd_and_dirty_log, _access, _with_af), \ 
.data_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES, \ @@ -820,16 +821,17 @@ static void help(char *name) _PREPARE(_access) }, \ .guest_test = _access, \ .mem_mark_cmd = CMD_HOLE_DATA | CMD_HOLE_PT, \ - .guest_test_check = { _CHECK(_with_af), _test_check }, \ + .guest_test_check = { _CHECK(_with_af), _test_check, _pt_check }, \ .uffd_data_handler = _uffd_data_handler, \ - .uffd_pt_handler = uffd_pt_write_handler, \ + .uffd_pt_handler = uffd_pt_handler, \ .expected_events = { .uffd_faults = _uffd_faults, }, \ } #define TEST_RO_MEMSLOT(_access, _mmio_handler, _mmio_exits) \ { \ - .name = SCAT3(ro_memslot, _access, _with_af), \ + .name = SCAT2(ro_memslot, _access), \ .data_memslot_flags = KVM_MEM_READONLY, \ + .pt_memslot_flags = KVM_MEM_READONLY, \ .guest_prepare = { _PREPARE(_access) }, \ .guest_test = _access, \ .mmio_handler = _mmio_handler, \ @@ -840,6 +842,7 @@ static void help(char *name) { \ .name = SCAT2(ro_memslot_no_syndrome, _access), \ .data_memslot_flags = KVM_MEM_READONLY, \ + .pt_memslot_flags = KVM_MEM_READONLY, \ .guest_test = _access, \ .fail_vcpu_run_handler = fail_vcpu_run_mmio_no_syndrome_handler, \ .expected_events = { .fail_vcpu_runs = 1 }, \ @@ -848,9 +851,9 @@ static void help(char *name) #define TEST_RO_MEMSLOT_AND_DIRTY_LOG(_access, _mmio_handler, _mmio_exits, \ _test_check) \ { \ - .name = SCAT3(ro_memslot, _access, _with_af), \ + .name = SCAT2(ro_memslot, _access), \ .data_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES, \ - .pt_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES, \ + .pt_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES, \ .guest_prepare = { _PREPARE(_access) }, \ .guest_test = _access, \ .guest_test_check = { _test_check }, \ @@ -862,7 +865,7 @@ static void help(char *name) { \ .name = SCAT2(ro_memslot_no_syn_and_dlog, _access), \ .data_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES, \ - .pt_memslot_flags = KVM_MEM_LOG_DIRTY_PAGES, \ + .pt_memslot_flags = KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES, \ .guest_test = _access, \ .guest_test_check = { _test_check }, \ .fail_vcpu_run_handler = fail_vcpu_run_mmio_no_syndrome_handler, \ @@ -874,11 +877,12 @@ static void help(char *name) { \ .name = SCAT2(ro_memslot_uffd, _access), \ .data_memslot_flags = KVM_MEM_READONLY, \ + .pt_memslot_flags = KVM_MEM_READONLY, \ .mem_mark_cmd = CMD_HOLE_DATA | CMD_HOLE_PT, \ .guest_prepare = { _PREPARE(_access) }, \ .guest_test = _access, \ .uffd_data_handler = _uffd_data_handler, \ - .uffd_pt_handler = uffd_pt_write_handler, \ + .uffd_pt_handler = uffd_pt_handler, \ .mmio_handler = _mmio_handler, \ .expected_events = { .mmio_exits = _mmio_exits, \ .uffd_faults = _uffd_faults }, \ @@ -889,10 +893,11 @@ static void help(char *name) { \ .name = SCAT2(ro_memslot_no_syndrome, _access), \ .data_memslot_flags = KVM_MEM_READONLY, \ + .pt_memslot_flags = KVM_MEM_READONLY, \ .mem_mark_cmd = CMD_HOLE_DATA | CMD_HOLE_PT, \ .guest_test = _access, \ .uffd_data_handler = _uffd_data_handler, \ - .uffd_pt_handler = uffd_pt_write_handler, \ + .uffd_pt_handler = uffd_pt_handler, \ .fail_vcpu_run_handler = fail_vcpu_run_mmio_no_syndrome_handler, \ .expected_events = { .fail_vcpu_runs = 1, \ .uffd_faults = _uffd_faults }, \ @@ -933,44 +938,51 @@ static struct test_desc tests[] = { * (S1PTW). */ TEST_UFFD(guest_read64, with_af, CMD_HOLE_DATA | CMD_HOLE_PT, - uffd_data_read_handler, uffd_pt_write_handler, 2), - /* no_af should also lead to a PT write. 
*/ + uffd_data_handler, uffd_pt_handler, 2), TEST_UFFD(guest_read64, no_af, CMD_HOLE_DATA | CMD_HOLE_PT, - uffd_data_read_handler, uffd_pt_write_handler, 2), - /* Note how that cas invokes the read handler. */ + uffd_data_handler, uffd_pt_handler, 2), TEST_UFFD(guest_cas, with_af, CMD_HOLE_DATA | CMD_HOLE_PT, - uffd_data_read_handler, uffd_pt_write_handler, 2), + uffd_data_handler, uffd_pt_handler, 2), /* * Can't test guest_at with_af as it's IMPDEF whether the AF is set. * The S1PTW fault should still be marked as a write. */ TEST_UFFD(guest_at, no_af, CMD_HOLE_DATA | CMD_HOLE_PT, - uffd_data_read_handler, uffd_pt_write_handler, 1), + uffd_no_handler, uffd_pt_handler, 1), TEST_UFFD(guest_ld_preidx, with_af, CMD_HOLE_DATA | CMD_HOLE_PT, - uffd_data_read_handler, uffd_pt_write_handler, 2), + uffd_data_handler, uffd_pt_handler, 2), TEST_UFFD(guest_write64, with_af, CMD_HOLE_DATA | CMD_HOLE_PT, - uffd_data_write_handler, uffd_pt_write_handler, 2), + uffd_data_handler, uffd_pt_handler, 2), TEST_UFFD(guest_dc_zva, with_af, CMD_HOLE_DATA | CMD_HOLE_PT, - uffd_data_write_handler, uffd_pt_write_handler, 2), + uffd_data_handler, uffd_pt_handler, 2), TEST_UFFD(guest_st_preidx, with_af, CMD_HOLE_DATA | CMD_HOLE_PT, - uffd_data_write_handler, uffd_pt_write_handler, 2), + uffd_data_handler, uffd_pt_handler, 2), TEST_UFFD(guest_exec, with_af, CMD_HOLE_DATA | CMD_HOLE_PT, - uffd_data_read_handler, uffd_pt_write_handler, 2), + uffd_data_handler, uffd_pt_handler, 2), /* * Try accesses when the data and PT memory regions are both * tracked for dirty logging. */ - TEST_DIRTY_LOG(guest_read64, with_af, guest_check_no_write_in_dirty_log), - /* no_af should also lead to a PT write. */ - TEST_DIRTY_LOG(guest_read64, no_af, guest_check_no_write_in_dirty_log), - TEST_DIRTY_LOG(guest_ld_preidx, with_af, guest_check_no_write_in_dirty_log), - TEST_DIRTY_LOG(guest_at, no_af, guest_check_no_write_in_dirty_log), - TEST_DIRTY_LOG(guest_exec, with_af, guest_check_no_write_in_dirty_log), - TEST_DIRTY_LOG(guest_write64, with_af, guest_check_write_in_dirty_log), - TEST_DIRTY_LOG(guest_cas, with_af, guest_check_write_in_dirty_log), - TEST_DIRTY_LOG(guest_dc_zva, with_af, guest_check_write_in_dirty_log), - TEST_DIRTY_LOG(guest_st_preidx, with_af, guest_check_write_in_dirty_log), + TEST_DIRTY_LOG(guest_read64, with_af, guest_check_no_write_in_dirty_log, + guest_check_s1ptw_wr_in_dirty_log), + TEST_DIRTY_LOG(guest_read64, no_af, guest_check_no_write_in_dirty_log, + guest_check_no_s1ptw_wr_in_dirty_log), + TEST_DIRTY_LOG(guest_ld_preidx, with_af, + guest_check_no_write_in_dirty_log, + guest_check_s1ptw_wr_in_dirty_log), + TEST_DIRTY_LOG(guest_at, no_af, guest_check_no_write_in_dirty_log, + guest_check_no_s1ptw_wr_in_dirty_log), + TEST_DIRTY_LOG(guest_exec, with_af, guest_check_no_write_in_dirty_log, + guest_check_s1ptw_wr_in_dirty_log), + TEST_DIRTY_LOG(guest_write64, with_af, guest_check_write_in_dirty_log, + guest_check_s1ptw_wr_in_dirty_log), + TEST_DIRTY_LOG(guest_cas, with_af, guest_check_write_in_dirty_log, + guest_check_s1ptw_wr_in_dirty_log), + TEST_DIRTY_LOG(guest_dc_zva, with_af, guest_check_write_in_dirty_log, + guest_check_s1ptw_wr_in_dirty_log), + TEST_DIRTY_LOG(guest_st_preidx, with_af, guest_check_write_in_dirty_log, + guest_check_s1ptw_wr_in_dirty_log), /* * Access when the data and PT memory regions are both marked for @@ -980,29 +992,43 @@ static struct test_desc tests[] = { * fault, and nothing in the dirty log. Any S1PTW should result in * a write in the dirty log and a userfaultfd write. 
*/ - TEST_UFFD_AND_DIRTY_LOG(guest_read64, with_af, uffd_data_read_handler, 2, - guest_check_no_write_in_dirty_log), - /* no_af should also lead to a PT write. */ - TEST_UFFD_AND_DIRTY_LOG(guest_read64, no_af, uffd_data_read_handler, 2, - guest_check_no_write_in_dirty_log), - TEST_UFFD_AND_DIRTY_LOG(guest_ld_preidx, with_af, uffd_data_read_handler, - 2, guest_check_no_write_in_dirty_log), - TEST_UFFD_AND_DIRTY_LOG(guest_at, with_af, 0, 1, - guest_check_no_write_in_dirty_log), - TEST_UFFD_AND_DIRTY_LOG(guest_exec, with_af, uffd_data_read_handler, 2, - guest_check_no_write_in_dirty_log), - TEST_UFFD_AND_DIRTY_LOG(guest_write64, with_af, uffd_data_write_handler, - 2, guest_check_write_in_dirty_log), - TEST_UFFD_AND_DIRTY_LOG(guest_cas, with_af, uffd_data_read_handler, 2, - guest_check_write_in_dirty_log), - TEST_UFFD_AND_DIRTY_LOG(guest_dc_zva, with_af, uffd_data_write_handler, - 2, guest_check_write_in_dirty_log), + TEST_UFFD_AND_DIRTY_LOG(guest_read64, with_af, + uffd_data_handler, 2, + guest_check_no_write_in_dirty_log, + guest_check_s1ptw_wr_in_dirty_log), + TEST_UFFD_AND_DIRTY_LOG(guest_read64, no_af, + uffd_data_handler, 2, + guest_check_no_write_in_dirty_log, + guest_check_no_s1ptw_wr_in_dirty_log), + TEST_UFFD_AND_DIRTY_LOG(guest_ld_preidx, with_af, + uffd_data_handler, + 2, guest_check_no_write_in_dirty_log, + guest_check_s1ptw_wr_in_dirty_log), + TEST_UFFD_AND_DIRTY_LOG(guest_at, with_af, uffd_no_handler, 1, + guest_check_no_write_in_dirty_log, + guest_check_s1ptw_wr_in_dirty_log), + TEST_UFFD_AND_DIRTY_LOG(guest_exec, with_af, + uffd_data_handler, 2, + guest_check_no_write_in_dirty_log, + guest_check_s1ptw_wr_in_dirty_log), + TEST_UFFD_AND_DIRTY_LOG(guest_write64, with_af, + uffd_data_handler, + 2, guest_check_write_in_dirty_log, + guest_check_s1ptw_wr_in_dirty_log), + TEST_UFFD_AND_DIRTY_LOG(guest_cas, with_af, + uffd_data_handler, 2, + guest_check_write_in_dirty_log, + guest_check_s1ptw_wr_in_dirty_log), + TEST_UFFD_AND_DIRTY_LOG(guest_dc_zva, with_af, + uffd_data_handler, + 2, guest_check_write_in_dirty_log, + guest_check_s1ptw_wr_in_dirty_log), TEST_UFFD_AND_DIRTY_LOG(guest_st_preidx, with_af, - uffd_data_write_handler, 2, - guest_check_write_in_dirty_log), - + uffd_data_handler, 2, + guest_check_write_in_dirty_log, + guest_check_s1ptw_wr_in_dirty_log), /* - * Try accesses when the data memory region is marked read-only + * Access when both the PT and data regions are marked read-only * (with KVM_MEM_READONLY). Writes with a syndrome result in an * MMIO exit, writes with no syndrome (e.g., CAS) result in a * failed vcpu run, and reads/execs with and without syndroms do @@ -1018,7 +1044,7 @@ static struct test_desc tests[] = { TEST_RO_MEMSLOT_NO_SYNDROME(guest_st_preidx), /* - * Access when both the data region is both read-only and marked + * The PT and data regions are both read-only and marked * for dirty logging at the same time. The expected result is that * for writes there should be no write in the dirty log. The * readonly handling is the same as if the memslot was not marked @@ -1043,7 +1069,7 @@ static struct test_desc tests[] = { guest_check_no_write_in_dirty_log), /* - * Access when the data region is both read-only and punched with + * The PT and data regions are both read-only and punched with * holes tracked with userfaultfd. The expected result is the * union of both userfaultfd and read-only behaviors. 
For example, * write accesses result in a userfaultfd write fault and an MMIO @@ -1051,22 +1077,15 @@ static struct test_desc tests[] = { * no userfaultfd write fault. Reads result in userfaultfd getting * triggered. */ - TEST_RO_MEMSLOT_AND_UFFD(guest_read64, 0, 0, - uffd_data_read_handler, 2), - TEST_RO_MEMSLOT_AND_UFFD(guest_ld_preidx, 0, 0, - uffd_data_read_handler, 2), - TEST_RO_MEMSLOT_AND_UFFD(guest_at, 0, 0, - uffd_no_handler, 1), - TEST_RO_MEMSLOT_AND_UFFD(guest_exec, 0, 0, - uffd_data_read_handler, 2), + TEST_RO_MEMSLOT_AND_UFFD(guest_read64, 0, 0, uffd_data_handler, 2), + TEST_RO_MEMSLOT_AND_UFFD(guest_ld_preidx, 0, 0, uffd_data_handler, 2), + TEST_RO_MEMSLOT_AND_UFFD(guest_at, 0, 0, uffd_no_handler, 1), + TEST_RO_MEMSLOT_AND_UFFD(guest_exec, 0, 0, uffd_data_handler, 2), TEST_RO_MEMSLOT_AND_UFFD(guest_write64, mmio_on_test_gpa_handler, 1, - uffd_data_write_handler, 2), - TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(guest_cas, - uffd_data_read_handler, 2), - TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(guest_dc_zva, - uffd_no_handler, 1), - TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(guest_st_preidx, - uffd_no_handler, 1), + uffd_data_handler, 2), + TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(guest_cas, uffd_data_handler, 2), + TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(guest_dc_zva, uffd_no_handler, 1), + TEST_RO_MEMSLOT_NO_SYNDROME_AND_UFFD(guest_st_preidx, uffd_no_handler, 1), { 0 } }; diff --git a/tools/testing/selftests/net/Makefile b/tools/testing/selftests/net/Makefile index 951bd5342bc6..3364c548a23b 100644 --- a/tools/testing/selftests/net/Makefile +++ b/tools/testing/selftests/net/Makefile @@ -46,6 +46,7 @@ TEST_PROGS += stress_reuseport_listen.sh TEST_PROGS += l2_tos_ttl_inherit.sh TEST_PROGS += bind_bhash.sh TEST_PROGS += ip_local_port_range.sh +TEST_PROGS += rps_default_mask.sh TEST_PROGS_EXTENDED := in_netns.sh setup_loopback.sh setup_veth.sh TEST_PROGS_EXTENDED += toeplitz_client.sh toeplitz.sh TEST_GEN_FILES = socket nettest diff --git a/tools/testing/selftests/net/config b/tools/testing/selftests/net/config index bd89198cd817..cc9fd55ab869 100644 --- a/tools/testing/selftests/net/config +++ b/tools/testing/selftests/net/config @@ -3,6 +3,9 @@ CONFIG_NET_NS=y CONFIG_BPF_SYSCALL=y CONFIG_TEST_BPF=m CONFIG_NUMA=y +CONFIG_RPS=y +CONFIG_SYSFS=y +CONFIG_PROC_SYSCTL=y CONFIG_NET_VRF=y CONFIG_NET_L3_MASTER_DEV=y CONFIG_IPV6=y diff --git a/tools/testing/selftests/net/forwarding/bridge_mdb.sh b/tools/testing/selftests/net/forwarding/bridge_mdb.sh index b48867d8cadf..ae3f9462a2b6 100755 --- a/tools/testing/selftests/net/forwarding/bridge_mdb.sh +++ b/tools/testing/selftests/net/forwarding/bridge_mdb.sh @@ -742,10 +742,109 @@ cfg_test_port() cfg_test_port_l2 } +ipv4_grps_get() +{ + local max_grps=$1; shift + local i + + for i in $(seq 0 $((max_grps - 1))); do + echo "239.1.1.$i" + done +} + +ipv6_grps_get() +{ + local max_grps=$1; shift + local i + + for i in $(seq 0 $((max_grps - 1))); do + echo "ff0e::$(printf %x $i)" + done +} + +l2_grps_get() +{ + local max_grps=$1; shift + local i + + for i in $(seq 0 $((max_grps - 1))); do + echo "01:00:00:00:00:$(printf %02x $i)" + done +} + +cfg_test_dump_common() +{ + local name=$1; shift + local fn=$1; shift + local max_bridges=2 + local max_grps=256 + local max_ports=32 + local num_entries + local batch_file + local grp + local i j + + RET=0 + + # Create net devices. 
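	# Annotation (not part of the patch): with the defaults above
	# this creates 2 bridges with 32 dummy ports each, joined to 256
	# groups per port, so every bridge ends up with 32 * 256 = 8192
	# permanent MDB entries -- large enough to make the subsequent
	# 'bridge mdb show' a genuinely large-scale dump.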
+ for i in $(seq 1 $max_bridges); do + ip link add name br-test${i} up type bridge vlan_filtering 1 \ + mcast_snooping 1 + for j in $(seq 1 $max_ports); do + ip link add name br-test${i}-du${j} up \ + master br-test${i} type dummy + done + done + + # Create batch file with MDB entries. + batch_file=$(mktemp) + for i in $(seq 1 $max_bridges); do + for j in $(seq 1 $max_ports); do + for grp in $($fn $max_grps); do + echo "mdb add dev br-test${i} \ + port br-test${i}-du${j} grp $grp \ + permanent vid 1" >> $batch_file + done + done + done + + # Program the batch file and check for expected number of entries. + bridge -b $batch_file + for i in $(seq 1 $max_bridges); do + num_entries=$(bridge mdb show dev br-test${i} | \ + grep "permanent" | wc -l) + [[ $num_entries -eq $((max_grps * max_ports)) ]] + check_err $? "Wrong number of entries in br-test${i}" + done + + # Cleanup. + rm $batch_file + for i in $(seq 1 $max_bridges); do + ip link del dev br-test${i} + for j in $(seq $max_ports); do + ip link del dev br-test${i}-du${j} + done + done + + log_test "$name large scale dump tests" +} + +# Check large scale dump. +cfg_test_dump() +{ + echo + log_info "# Large scale dump tests" + + cfg_test_dump_common "IPv4" ipv4_grps_get + cfg_test_dump_common "IPv6" ipv6_grps_get + cfg_test_dump_common "L2" l2_grps_get +} + cfg_test() { cfg_test_host cfg_test_port + cfg_test_dump } __fwd_test_host_ip() diff --git a/tools/testing/selftests/net/forwarding/lib.sh b/tools/testing/selftests/net/forwarding/lib.sh index cc7c4f89a097..d47499ba81c7 100755 --- a/tools/testing/selftests/net/forwarding/lib.sh +++ b/tools/testing/selftests/net/forwarding/lib.sh @@ -893,14 +893,14 @@ sysctl_set() local value=$1; shift SYSCTL_ORIG[$key]=$(sysctl -n $key) - sysctl -qw $key=$value + sysctl -qw $key="$value" } sysctl_restore() { local key=$1; shift - sysctl -qw $key=${SYSCTL_ORIG["$key"]} + sysctl -qw $key="${SYSCTL_ORIG[$key]}" } forwarding_enable() diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh index 387abdcec011..42e3bd1a05f5 100755 --- a/tools/testing/selftests/net/mptcp/mptcp_join.sh +++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh @@ -498,6 +498,12 @@ kill_events_pids() kill_wait $evts_ns2_pid } +kill_tests_wait() +{ + kill -SIGUSR1 $(ip netns pids $ns2) $(ip netns pids $ns1) + wait +} + pm_nl_set_limits() { local ns=$1 @@ -1687,6 +1693,7 @@ chk_subflow_nr() local subflow_nr=$3 local cnt1 local cnt2 + local dump_stats if [ -n "${need_title}" ]; then printf "%03u %-36s %s" "${TEST_COUNT}" "${TEST_NAME}" "${msg}" @@ -1704,7 +1711,12 @@ chk_subflow_nr() echo "[ ok ]" fi - [ "${dump_stats}" = 1 ] && ( ss -N $ns1 -tOni ; ss -N $ns1 -tOni | grep token; ip -n $ns1 mptcp endpoint ) + if [ "${dump_stats}" = 1 ]; then + ss -N $ns1 -tOni + ss -N $ns1 -tOni | grep token + ip -n $ns1 mptcp endpoint + dump_stats + fi } chk_link_usage() @@ -3083,7 +3095,7 @@ endpoint_tests() pm_nl_set_limits $ns1 2 2 pm_nl_set_limits $ns2 2 2 pm_nl_add_endpoint $ns1 10.0.2.1 flags signal - run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow & + run_tests $ns1 $ns2 10.0.1.1 0 0 0 slow 2>/dev/null & wait_mpj $ns1 pm_nl_check_endpoint 1 "creation" \ @@ -3096,14 +3108,14 @@ endpoint_tests() pm_nl_add_endpoint $ns2 10.0.2.2 flags signal pm_nl_check_endpoint 0 "modif is allowed" \ $ns2 10.0.2.2 id 1 flags signal - wait + kill_tests_wait fi if reset "delete and re-add"; then pm_nl_set_limits $ns1 1 1 pm_nl_set_limits $ns2 1 1 pm_nl_add_endpoint $ns2 10.0.2.2 id 2 dev ns2eth2 flags subflow - run_tests 
$ns1 $ns2 10.0.1.1 4 0 0 slow & + run_tests $ns1 $ns2 10.0.1.1 4 0 0 speed_20 2>/dev/null & wait_mpj $ns2 pm_nl_del_endpoint $ns2 2 10.0.2.2 @@ -3113,7 +3125,7 @@ endpoint_tests() pm_nl_add_endpoint $ns2 10.0.2.2 dev ns2eth2 flags subflow wait_mpj $ns2 chk_subflow_nr "" "after re-add" 2 - wait + kill_tests_wait fi } diff --git a/tools/testing/selftests/net/rps_default_mask.sh b/tools/testing/selftests/net/rps_default_mask.sh new file mode 100755 index 000000000000..c81c0ac7ddfe --- /dev/null +++ b/tools/testing/selftests/net/rps_default_mask.sh @@ -0,0 +1,57 @@ +#!/bin/sh +# SPDX-License-Identifier: GPL-2.0 + +readonly ksft_skip=4 +readonly cpus=$(nproc) +ret=0 + +[ $cpus -gt 2 ] || exit $ksft_skip + +readonly INITIAL_RPS_DEFAULT_MASK=$(cat /proc/sys/net/core/rps_default_mask) +readonly NETNS="ns-$(mktemp -u XXXXXX)" + +setup() { + ip netns add "${NETNS}" + ip -netns "${NETNS}" link set lo up +} + +cleanup() { + echo $INITIAL_RPS_DEFAULT_MASK > /proc/sys/net/core/rps_default_mask + ip netns del $NETNS +} + +chk_rps() { + local rps_mask expected_rps_mask=$3 + local dev_name=$2 + local msg=$1 + + rps_mask=$(ip netns exec $NETNS cat /sys/class/net/$dev_name/queues/rx-0/rps_cpus) + printf "%-60s" "$msg" + if [ $rps_mask -eq $expected_rps_mask ]; then + echo "[ ok ]" + else + echo "[fail] expected $expected_rps_mask found $rps_mask" + ret=1 + fi +} + +trap cleanup EXIT + +echo 0 > /proc/sys/net/core/rps_default_mask +setup +chk_rps "empty rps_default_mask" lo 0 +cleanup + +echo 1 > /proc/sys/net/core/rps_default_mask +setup +chk_rps "non zero rps_default_mask" lo 1 + +echo 3 > /proc/sys/net/core/rps_default_mask +chk_rps "changing rps_default_mask dont affect existing netns" lo 1 + +ip -n $NETNS link add type veth +ip -n $NETNS link set dev veth0 up +ip -n $NETNS link set dev veth1 up +chk_rps "changing rps_default_mask affect newly created devices" veth0 3 +chk_rps "changing rps_default_mask affect newly created devices[II]" veth1 3 +exit $ret diff --git a/tools/testing/selftests/net/test_vxlan_vnifiltering.sh b/tools/testing/selftests/net/test_vxlan_vnifiltering.sh index 704997ffc244..8c3ac0a72545 100755 --- a/tools/testing/selftests/net/test_vxlan_vnifiltering.sh +++ b/tools/testing/selftests/net/test_vxlan_vnifiltering.sh @@ -293,19 +293,11 @@ setup-vm() { elif [[ -n $vtype && $vtype == "vnifilterg" ]]; then # Add per vni group config with 'bridge vni' api if [ -n "$group" ]; then - if [ "$family" == "v4" ]; then - if [ $mcast -eq 1 ]; then - bridge -netns hv-$hvid vni add dev $vxlandev vni $tid group $group - else - bridge -netns hv-$hvid vni add dev $vxlandev vni $tid remote $group - fi - else - if [ $mcast -eq 1 ]; then - bridge -netns hv-$hvid vni add dev $vxlandev vni $tid group6 $group - else - bridge -netns hv-$hvid vni add dev $vxlandev vni $tid remote6 $group - fi - fi + if [ $mcast -eq 1 ]; then + bridge -netns hv-$hvid vni add dev $vxlandev vni $tid group $group + else + bridge -netns hv-$hvid vni add dev $vxlandev vni $tid remote $group + fi fi fi done diff --git a/tools/testing/selftests/vm/hugetlb-madvise.c b/tools/testing/selftests/vm/hugetlb-madvise.c index a634f47d1e56..9a127a8fe176 100644 --- a/tools/testing/selftests/vm/hugetlb-madvise.c +++ b/tools/testing/selftests/vm/hugetlb-madvise.c @@ -17,7 +17,6 @@ #include <stdio.h> #include <unistd.h> #include <sys/mman.h> -#define __USE_GNU #include <fcntl.h> #define MIN_FREE_PAGES 20 |
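
As a companion note to the xdp_features tool added above: the tester and DUT speak the small TLV control protocol from xdp_features.h over a TCP control connection. Below is a minimal client-side sketch mirroring the send_and_recv_msg()/recv_msg() logic from xdp_features.c; get_dut_stats() is a hypothetical helper name, and a connected control-socket fd is assumed.

    #include <arpa/inet.h>
    #include <errno.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <linux/types.h>

    /* Mirrors the definitions in xdp_features.h. */
    enum test_commands { CMD_STOP, CMD_START, CMD_ECHO, CMD_ACK,
                         CMD_GET_XDP_CAP, CMD_GET_STATS };

    struct tlv_hdr {
        __be16 type;
        __be16 len;
        __u8 data[];
    };

    /* Ask the DUT for its stats counter over the control connection.
     * On success *stats still holds the network-byte-order value,
     * just as tester_run() receives it before calling ntohl(). */
    static int get_dut_stats(int fd, unsigned int *stats)
    {
        unsigned char buf[128] = {};
        struct tlv_hdr *tlv = (struct tlv_hdr *)buf;
        ssize_t n;

        tlv->type = htons(CMD_GET_STATS);
        tlv->len = htons(sizeof(*tlv));
        if (send(fd, buf, sizeof(*tlv), 0) < 0)
            return -errno;

        n = recv(fd, buf, sizeof(buf), 0);
        if (n < (ssize_t)sizeof(*tlv) || n != ntohs(tlv->len))
            return -EINVAL;
        if (ntohs(tlv->type) != CMD_ACK)
            return -EINVAL;
        if (n - (ssize_t)sizeof(*tlv) < (ssize_t)sizeof(*stats))
            return -EINVAL;

        memcpy(stats, tlv->data, sizeof(*stats));
        return 0;
    }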