path: root/tools
2020-11-06  bpf: Fix tests for local_storage  (KP Singh; 1 file changed, -9/+15)
The {inode,sk}_storage_result variable, which records whether the correct value was retrieved, was being clobbered unconditionally by the return value of the bpf_{inode,sk}_storage_delete call. Also, consistently use the newly added BPF_LOCAL_STORAGE_GET_F_CREATE flag.
Fixes: cd324d7abb3d ("bpf: Add selftests for local_storage")
Signed-off-by: KP Singh <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Acked-by: Song Liu <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2020-11-06  bpf: Implement get_current_task_btf and RET_PTR_TO_BTF_ID  (KP Singh; 1 file changed, -0/+9)
The currently available bpf_get_current_task returns an unsigned integer which can be used along with BPF_CORE_READ to read data from the task_struct, but it still cannot be used as an input argument to a helper that accepts an ARG_PTR_TO_BTF_ID of type task_struct. In order to implement this helper, a new return type, RET_PTR_TO_BTF_ID, is added. This is similar to RET_PTR_TO_BTF_ID_OR_NULL but does not require checking the nullness of the returned pointer.
Signed-off-by: KP Singh <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Acked-by: Song Liu <[email protected]>
Acked-by: Martin KaFai Lau <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

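To make the new helper concrete, here is a minimal sketch of a BPF program using it. The hook, the printed field and the includes are illustrative assumptions (the usual selftest setup with a bpftool-generated vmlinux.h), not the patch's own test code:

// SPDX-License-Identifier: GPL-2.0
/* Hedged sketch: bpf_get_current_task_btf() returns a BTF-typed
 * (PTR_TO_BTF_ID) task_struct pointer, so fields can be read directly
 * and the pointer can be passed to helpers that take an
 * ARG_PTR_TO_BTF_ID of type task_struct. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

SEC("lsm/file_open")
int BPF_PROG(log_open, struct file *file)
{
        struct task_struct *task = bpf_get_current_task_btf();

        /* Direct field access -- no BPF_CORE_READ() needed, since the
         * verifier knows the BTF type of 'task'. */
        bpf_printk("file_open by pid %d", task->pid);
        return 0;
}
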
2020-11-06  bpftool: Add support for task local storage  (KP Singh; 3 files changed, -3/+6)
Updates the binary to handle BPF_MAP_TYPE_TASK_STORAGE as "task_storage" for printing and parsing. Also updates the documentation and bash completion.
Signed-off-by: KP Singh <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Acked-by: Song Liu <[email protected]>
Acked-by: Martin KaFai Lau <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2020-11-06  libbpf: Add support for task local storage  (KP Singh; 1 file changed, -0/+1)
Updates the bpf_probe_map_type API to also support BPF_MAP_TYPE_TASK_STORAGE similar to other local storage maps.
Signed-off-by: KP Singh <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Acked-by: Martin KaFai Lau <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

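As a quick illustration of the probing API this touches, the sketch below checks at runtime whether the kernel supports task local storage maps. The surrounding program is an assumption; bpf_probe_map_type() is the libbpf feature-probing API of this era (later libbpf versions renamed it to libbpf_probe_bpf_map_type()):

// SPDX-License-Identifier: GPL-2.0
/* Hedged sketch: probe for BPF_MAP_TYPE_TASK_STORAGE support. */
#include <stdbool.h>
#include <stdio.h>
#include <linux/bpf.h>
#include <bpf/libbpf.h>

int main(void)
{
        /* Second argument is an ifindex, only relevant for offload probing. */
        bool supported = bpf_probe_map_type(BPF_MAP_TYPE_TASK_STORAGE, 0);

        printf("task local storage maps: %ssupported\n",
               supported ? "" : "not ");
        return 0;
}
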
2020-11-06  bpf: Implement task local storage  (KP Singh; 1 file changed, -0/+39)
Similar to bpf_local_storage for sockets and inodes, add local storage for task_struct. The life-cycle of the storage is managed with the life-cycle of the task_struct, i.e. the storage is destroyed along with the owning task via a callback to bpf_task_storage_free from the task_free LSM hook. The BPF LSM allocates an __rcu pointer to the bpf_local_storage in the security blob; security blobs are now stackable and can co-exist with other LSMs. Userspace map operations can be done by passing a pidfd as the key to the lookup, update and delete operations.
Signed-off-by: KP Singh <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Acked-by: Song Liu <[email protected]>
Acked-by: Martin KaFai Lau <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

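A minimal sketch of how a BPF LSM program might use the new map type follows. The value layout and the chosen hook are assumptions for illustration; BPF_MAP_TYPE_TASK_STORAGE, bpf_task_storage_get() and BPF_LOCAL_STORAGE_GET_F_CREATE are what this series provides. From userspace, the same map can then be accessed with the usual map operations using a pidfd as the key, as described above.

// SPDX-License-Identifier: GPL-2.0
/* Hedged sketch: attach per-task storage from a BPF LSM hook. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

struct task_stats {
        __u64 opens;
};

struct {
        __uint(type, BPF_MAP_TYPE_TASK_STORAGE);
        __uint(map_flags, BPF_F_NO_PREALLOC);   /* required for local storage */
        __type(key, int);
        __type(value, struct task_stats);
} task_map SEC(".maps");

SEC("lsm/file_open")
int BPF_PROG(count_opens, struct file *file)
{
        struct task_struct *task = bpf_get_current_task_btf();
        struct task_stats *stats;

        /* Get (or create) this task's storage slot. */
        stats = bpf_task_storage_get(&task_map, task, 0,
                                     BPF_LOCAL_STORAGE_GET_F_CREATE);
        if (stats)
                stats->opens++;
        return 0;
}
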
2020-11-05  bpf: Lift hashtab key_size limit  (Florian Lehner; 3 files changed, -1/+89)
Currently the key_size of hashtab is limited to MAX_BPF_STACK. As the key of a hashtab can also be a value from a per-cpu map, it can be larger than MAX_BPF_STACK. The use-case motivating this patch is implementing allow/disallow lists for files and file paths. The maximum length of file paths is defined by PATH_MAX as 4096 chars including the nul terminator, which exceeds MAX_BPF_STACK.
Changelog:
v5:
 - Fix cast overflow
v4:
 - Utilize BPF skeleton in tests
 - Rebase
v3:
 - Rebase
v2:
 - Add a test for bpf side
Signed-off-by: Florian Lehner <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Acked-by: John Fastabend <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

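To illustrate the use-case, here is a hedged sketch of a hash map keyed by a full PATH_MAX-sized path, with the key built in a per-cpu array value instead of on the 512-byte BPF stack. All names, the hook and the surrounding logic are assumptions:

// SPDX-License-Identifier: GPL-2.0
/* Hedged sketch: a 4096-byte hash map key that would not fit on the BPF
 * stack (MAX_BPF_STACK is 512 bytes), assembled in a per-cpu array. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

struct path_key {
        char path[4096];                         /* PATH_MAX, incl. nul */
};

struct {
        __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY); /* scratch space for the key */
        __uint(max_entries, 1);
        __type(key, __u32);
        __type(value, struct path_key);
} scratch SEC(".maps");

struct {
        __uint(type, BPF_MAP_TYPE_HASH);         /* key_size > MAX_BPF_STACK */
        __uint(max_entries, 1024);
        __type(key, struct path_key);
        __type(value, __u8);                     /* 1 = allowed */
} allowlist SEC(".maps");

SEC("lsm/file_open")
int BPF_PROG(check_path, struct file *file)
{
        __u32 zero = 0;
        struct path_key *key = bpf_map_lookup_elem(&scratch, &zero);
        __u8 *allowed;

        if (!key)
                return 0;
        /* ... fill key->path from the file argument (elided) ... */
        allowed = bpf_map_lookup_elem(&allowlist, key);
        if (!allowed)
                bpf_printk("path not on allowlist");
        return 0;
}
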
2020-11-05  bpf: Zero-fill re-used per-cpu map element  (David Verbeiren; 2 files changed, -0/+247)
Zero-fill element values for all cpus other than the current one, just as when not using prealloc. This is the only way the bpf program can ensure known initial values for all cpus ('onallcpus' cannot be set when coming from the bpf program). The scenario is: a bpf program inserts some elements in a per-cpu map, then deletes some (or userspace does). When later adding new elements using bpf_map_update_elem(), the bpf program can only set the value of the new elements for the current cpu. When prealloc is enabled, previously deleted elements are re-used. Without the fix, values for other cpus remain whatever they were when the re-used entry was previously freed. A selftest is added to validate correct operation in the above scenario as well as in case of LRU per-cpu map element re-use.
Fixes: 6c9059817432 ("bpf: pre-allocate hash map elements")
Signed-off-by: David Verbeiren <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Acked-by: Matthieu Baerts <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

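The scenario is easier to see in code. Below is a hedged sketch (map shape, hook and values are assumptions) of a BPF program updating a preallocated per-cpu hash map; the fix guarantees that, for an element re-used from the free list, the copies belonging to the other cpus start out zeroed:

// SPDX-License-Identifier: GPL-2.0
/* Hedged sketch: from BPF program context, bpf_map_update_elem() on a
 * per-cpu map can only set the current cpu's copy of the value. With
 * prealloc (the default), a re-used element's other-cpu copies used to
 * keep stale data; after this fix they are zero-filled. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

char LICENSE[] SEC("license") = "GPL";

struct {
        __uint(type, BPF_MAP_TYPE_PERCPU_HASH);  /* preallocated by default */
        __uint(max_entries, 128);
        __type(key, __u32);
        __type(value, __u64);
} counters SEC(".maps");

SEC("tp/syscalls/sys_enter_write")
int on_write(void *ctx)
{
        __u32 key = bpf_get_current_pid_tgid() >> 32;
        __u64 init = 1;
        __u64 *val = bpf_map_lookup_elem(&counters, &key);

        if (val) {
                (*val)++;                /* current cpu's copy only */
                return 0;
        }
        /* Creates the element; only the current cpu's copy becomes 1,
         * the other cpus' copies are now guaranteed to be 0. */
        bpf_map_update_elem(&counters, &key, &init, BPF_NOEXIST);
        return 0;
}
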
2020-11-05  tools/bpftool: Add bpftool support for split BTF  (Andrii Nakryiko; 3 files changed, -4/+21)
Add the ability to work with split BTF by providing an extra -B flag, which allows specifying the path to the base BTF file.
Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Acked-by: Song Liu <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2020-11-05  selftests/bpf: Add split BTF dedup selftests  (Andrii Nakryiko; 3 files changed, -0/+391)
Add selftests validating BTF deduplication for the split BTF case. Add a helper macro that allows validating the entire BTF with a raw BTF dump, not just type-by-type. This saves tons of code and complexity.
Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Acked-by: Song Liu <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2020-11-05  libbpf: Accommodate DWARF/compiler bug with duplicated identical arrays  (Andrii Nakryiko; 1 file changed, -2/+25)
In some cases the compiler seems to generate distinct DWARF types for identical arrays within the same CU. That seems like a bug, but it's already out there and breaks type graph equivalence checks, so accommodate it anyway by checking for identical arrays, regardless of their type ID.
Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Acked-by: Song Liu <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2020-11-05  libbpf: Support BTF dedup of split BTFs  (Andrii Nakryiko; 1 file changed, -53/+168)
Add support for deduplicating split BTFs. When deduplicating split BTF, the base BTF is considered to be immutable and can't be modified or adjusted. 99% of the BTF deduplication logic is left intact (modulo some type numbering adjustments). There are only two differences.
First, each type in the base BTF gets hashed (except VAR and DATASEC, of course; those are always considered to be self-canonical instances) and added into a table of canonical type candidates. Hashing is a shallow, fast operation, so it mostly eliminates the overhead of having the entire base BTF be a part of BTF dedup.
The second difference is critical and subtle. While deduplicating split BTF types, it is possible to discover that one of the immutable base BTF's BTF_KIND_FWD types can and should be resolved to a full STRUCT/UNION type from the split BTF part. This, obviously, can't happen because we can't modify the base BTF types anymore. Because of that, any type in the split BTF that directly or indirectly references that newly-to-be-resolved FWD type can't be considered equivalent to the corresponding canonical types in the base BTF, because that would result in a loss of type resolution information. In such a case, split BTF types are deduplicated separately and cause some duplication of type information, which is unavoidable.
With those two changes, the rest of the algorithm manages to deduplicate split BTF correctly, pointing all the duplicates to their canonical counterparts in the base BTF, while also deduplicating whatever unique types are present in the split BTF on their own. Also, theoretically, split BTF after deduplication could end up with either an empty type section or an empty string section. This is handled correctly by libbpf in one of the previous patches in the series.
Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Acked-by: Song Liu <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2020-11-05  libbpf: Fix BTF data layout checks and allow empty BTF  (Andrii Nakryiko; 1 file changed, -10/+6)
Make data section layout checks stricter, disallowing overlap of types and strings data. Additionally, allow BTFs with no type data. There is nothing inherently wrong with having a BTF with no types (but potentially with some strings). This could be the situation for kernel module BTFs, if a module doesn't introduce any new type information. Also fix an invalid offset alignment check for btf->hdr->type_off.
Fixes: 8a138aed4a80 ("bpf: btf: Add BTF support to libbpf")
Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2020-11-05  selftests/bpf: Add checking of raw type dump in BTF writer APIs selftests  (Andrii Nakryiko; 4 files changed, -1/+256)
Add re-usable btf_helpers.{c,h} to provide BTF-related testing routines. Start by adding raw BTF dumping helpers. A raw BTF dump is the most succinct and at the same time a very human-friendly way to validate the exact contents of BTF types. Cross-validate raw BTF dump and writable BTF in a single selftest. Raw type dump checks also serve as good self-documentation.
Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Acked-by: Song Liu <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2020-11-05  selftests/bpf: Add split BTF basic test  (Andrii Nakryiko; 2 files changed, -0/+110)
Add selftest validating ability to programmatically generate and then dump split BTF.
Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Acked-by: Song Liu <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2020-11-05  libbpf: Implement basic split BTF support  (Andrii Nakryiko; 3 files changed, -45/+169)
Support split BTF operation, in which one BTF (base BTF) provides a basic set of types and strings, while another one (split BTF) builds on top of the base's types and strings and adds its own new types and strings. From an API standpoint, the fact that the split BTF is built on top of the base BTF is transparent. Type numbering is transparent as well: if the base BTF had last type ID #N, then all types in the split BTF start at type ID N+1. Any type in the split BTF can reference base BTF types, but not vice versa.
Programmatic construction of a split BTF on top of a base BTF is supported: one can create an empty split BTF with btf__new_empty_split() and pass the base BTF as an input, or pass raw binary data to btf__new_split(), or use the btf__parse_xxx_split() variants to get the initial set of split types/strings from an ELF file with a .BTF section.
String offsets are similarly transparent and are a logical continuation of the base BTF's strings. When building BTF programmatically and adding a new string (explicitly with btf__add_str() or implicitly through appending new types/members), the string-to-be-added is first looked up in the base BTF's string section and re-used if it's there. If not, it is looked up and/or added to the split BTF string section. Similarly to type IDs, types in the split BTF can refer to strings from the base BTF absolutely transparently (but not vice versa, of course, because the base BTF doesn't "know" about the existence of the split BTF).
The internal type index is slightly adjusted to be zero-indexed, ignoring the fake [0] VOID type. This allows handling split/base BTF type lookups transparently by using the btf->start_id type ID offset, which is always 1 for base/non-split BTF and equals btf__get_nr_types(base_btf) + 1 for the split BTF. BTF deduplication is not yet supported for split BTF; support for it will be added in a separate patch.
Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Acked-by: Song Liu <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

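The following hedged sketch shows the programmatic flow described above: parse a base BTF, create an empty split BTF on top of it, and add a type that references a base type. The vmlinux path and the added struct are illustrative assumptions; btf__parse(), btf__new_empty_split() and the btf__add_*() calls are the libbpf APIs named here and in the earlier writable-BTF work:

// SPDX-License-Identifier: GPL-2.0
/* Hedged sketch: build a split BTF on top of the kernel's base BTF. */
#include <stdio.h>
#include <bpf/btf.h>
#include <bpf/libbpf.h>

int main(void)
{
        struct btf *base, *split;
        int task_id, ptr_id, st_id;

        base = btf__parse("/sys/kernel/btf/vmlinux", NULL);
        if (libbpf_get_error(base))
                return 1;

        /* Type IDs in the split BTF continue after the base's last ID. */
        split = btf__new_empty_split(base);
        if (libbpf_get_error(split))
                return 1;

        /* Split types may reference base types, but not vice versa. */
        task_id = btf__find_by_name_kind(base, "task_struct", BTF_KIND_STRUCT);
        ptr_id = btf__add_ptr(split, task_id);
        st_id = btf__add_struct(split, "my_module_ctx", 8);
        btf__add_field(split, "owner", ptr_id, 0, 0);

        printf("new struct got type ID %d\n", st_id);

        btf__free(split);
        btf__free(base);
        return 0;
}
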
2020-11-05  libbpf: Unify and speed up BTF string deduplication  (Andrii Nakryiko; 1 file changed, -165/+98)
Revamp BTF dedup's string deduplication to match the approach of writable BTF string management. This allows transferring the deduplicated string index back to the BTF object after deduplication without expensive extra memory copying and hash map re-construction. It also simplifies the code and speeds it up, because hashmap-based string deduplication is faster than the sort + unique approach.
Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Acked-by: Song Liu <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2020-11-05  selftest/bpf: Relax btf_dedup test checks  (Andrii Nakryiko; 1 file changed, -15/+25)
Remove the requirement of strictly exact string section contents. This used to be true when string deduplication was done through sorting, but with string dedup done through a hash table, it's no longer true. So relax the test harness's string checks and, consequently, the type checks, which now don't have to have exactly the same string offsets.
Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2020-11-05  libbpf: Factor out common operations in BTF writing APIs  (Andrii Nakryiko; 1 file changed, -80/+43)
Factor out committing of appended type data. Also extract fetching the very last type in the BTF (to append members to). These two operations are common across many APIs and will be easier to refactor with split BTF if they are extracted into a single place.
Signed-off-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Acked-by: Song Liu <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2020-11-05  tools/bpftool: Fix attaching flow dissector  (Lorenz Bauer; 1 file changed, -1/+1)
My earlier patch to reject non-zero arguments to flow dissector attach broke attaching via bpftool. Instead of 0 it uses -1 for target_fd. Fix this by passing a zero argument when attaching the flow dissector.
Fixes: 1b514239e859 ("bpf: flow_dissector: Check value of unused flags to BPF_PROG_ATTACH")
Reported-by: Jiri Benc <[email protected]>
Signed-off-by: Lorenz Bauer <[email protected]>
Signed-off-by: Alexei Starovoitov <[email protected]>
Acked-by: Song Liu <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2020-11-05  Merge tag 'linux-kselftest-kunit-fixes-5.10-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest  (Linus Torvalds; 8 files changed, -14/+37)
Pull Kunit fixes from Shuah Khan:
 "Several kunit_tool and documentation fixes"
* tag 'linux-kselftest-kunit-fixes-5.10-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest:
  kunit: tools: fix kunit_tool tests for parsing test plans
  Documentation: kunit: Update Kconfig parts for KUNIT's module support
  kunit: test: fix remaining kernel-doc warnings
  kunit: Don't fail test suites if one of them is empty
  kunit: Fix kunit.py --raw_output option

2020-11-05  selftests: binderfs: use SKIP instead of XFAIL  (Tommi Rantala; 1 file changed, -4/+4)
XFAIL is gone since commit 9847d24af95c ("selftests/harness: Refactor XFAIL into SKIP"), use SKIP instead.
Fixes: 9847d24af95c ("selftests/harness: Refactor XFAIL into SKIP")
Signed-off-by: Tommi Rantala <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Acked-by: Christian Brauner <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>

2020-11-05  selftests: clone3: use SKIP instead of XFAIL  (Tommi Rantala; 1 file changed, -1/+1)
XFAIL is gone since commit 9847d24af95c ("selftests/harness: Refactor XFAIL into SKIP"), use SKIP instead.
Fixes: 9847d24af95c ("selftests/harness: Refactor XFAIL into SKIP")
Signed-off-by: Tommi Rantala <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Acked-by: Christian Brauner <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>

2020-11-05  selftests: core: use SKIP instead of XFAIL in close_range_test.c  (Tommi Rantala; 1 file changed, -4/+4)
XFAIL is gone since commit 9847d24af95c ("selftests/harness: Refactor XFAIL into SKIP"), use SKIP instead.
Fixes: 9847d24af95c ("selftests/harness: Refactor XFAIL into SKIP")
Signed-off-by: Tommi Rantala <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
Acked-by: Christian Brauner <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>

2020-11-05  selftests: proc: fix warning: _GNU_SOURCE redefined  (Tommi Rantala; 3 files changed, -3/+0)
Makefile already contains -D_GNU_SOURCE, so we can remove it from the *.c files.
Signed-off-by: Tommi Rantala <[email protected]>
Signed-off-by: Shuah Khan <[email protected]>

2020-11-04  selftests: mptcp: add ADD_ADDR timeout test case  (Geliang Tang; 2 files changed, -24/+80)
This patch adds a test case for retransmitting ADD_ADDR when a timeout occurs. It sets NS1's add_addr_timeout to 1 second and drops NS2's ADD_ADDR echo packets. Here we need to slow down the data transfer so that the ADD_ADDR suboption can be retransmitted three times. So we add a new parameter, "speed", to do_transfer; it can be set to fast or slow. We also add three new optional parameters to run_tests and drop the run_remove_tests function. Since this test case adds netfilter rules, we also need to update the "config" file.
Suggested-by: Matthieu Baerts <[email protected]>
Suggested-by: Paolo Abeni <[email protected]>
Acked-by: Paolo Abeni <[email protected]>
Reviewed-by: Matthieu Baerts <[email protected]>
Signed-off-by: Geliang Tang <[email protected]>
Signed-off-by: Mat Martineau <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>

2020-11-04  selftests: net: bridge: add test for mldv2 *,g auto-add  (Nikolay Aleksandrov; 1 file changed, -1/+30)
When we have *,G ports in exclude mode and a new S,G,port is added, the kernel has to automatically create an S,G entry for each exclude port to get proper forwarding.
Signed-off-by: Nikolay Aleksandrov <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>

2020-11-04  selftests: net: bridge: add test for mldv2 exclude timeout  (Nikolay Aleksandrov; 1 file changed, -1/+47)
Test that when a group in exclude mode expires it changes mode to include and the blocked entries are deleted.
Signed-off-by: Nikolay Aleksandrov <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>

2020-11-04  selftests: net: bridge: add test for mldv2 exc -> block report  (Nikolay Aleksandrov; 1 file changed, -1/+30)
The test checks for the following case:
  Router State    Report Received   New Router State       Actions
  EXCLUDE (X,Y)   BLOCK (A)         EXCLUDE (X+(A-Y),Y)    (A-X-Y) = Filter Timer
                                                           Send Q(MA,A-Y)
Signed-off-by: Nikolay Aleksandrov <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>

2020-11-04  selftests: net: bridge: add test for mldv2 inc -> block report  (Nikolay Aleksandrov; 1 file changed, -1/+36)
The test checks for the following case:
  Router State    Report Received   New Router State   Actions
  INCLUDE (A)     BLOCK (B)         INCLUDE (A)        Send Q(MA,A*B)
Signed-off-by: Nikolay Aleksandrov <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>

2020-11-04  selftests: net: bridge: add test for mldv2 exc -> to_exclude report  (Nikolay Aleksandrov; 1 file changed, -1/+29)
The test checks for the following case:
  Router State    Report Received   New Router State     Actions
  EXCLUDE (X,Y)   TO_EX (A)         EXCLUDE (A-Y,Y*A)    (A-X-Y) = Filter Timer
                                                         Delete (X-A)
                                                         Delete (Y-A)
                                                         Send Q(MA,A-Y)
                                                         Filter Timer=MALI
Signed-off-by: Nikolay Aleksandrov <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>

2020-11-04  selftests: net: bridge: add test for mldv2 exc -> is_exclude report  (Nikolay Aleksandrov; 1 file changed, -1/+30)
The test checks for the following case:
  Router State    Report Received   New Router State      Actions
  EXCLUDE (X,Y)   IS_EX (A)         EXCLUDE (A-Y, Y*A)    (A-X-Y)=MALI
                                                          Delete (X-A)
                                                          Delete (Y-A)
                                                          Filter Timer=MALI
Signed-off-by: Nikolay Aleksandrov <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>

2020-11-04  selftests: net: bridge: add test for mldv2 exc -> is_include report  (Nikolay Aleksandrov; 1 file changed, -1/+29)
The test checks for the following case:
  Router State    Report Received   New Router State      Actions
  EXCLUDE (X,Y)   IS_IN (A)         EXCLUDE (X+A, Y-A)    (A)=MALI
Signed-off-by: Nikolay Aleksandrov <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>

2020-11-04  selftests: net: bridge: add test for mldv2 exc -> allow report  (Nikolay Aleksandrov; 1 file changed, -1/+29)
The test checks for the following case:
  Router State    Report Received   New Router State     Actions
  EXCLUDE (X,Y)   ALLOW (A)         EXCLUDE (X+A,Y-A)    (A)=MALI
Signed-off-by: Nikolay Aleksandrov <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>

2020-11-04  selftests: net: bridge: add test for mldv2 inc -> to_exclude report  (Nikolay Aleksandrov; 1 file changed, -1/+55)
The test checks for the following case:
  Router State    Report Received   New Router State     Actions
  INCLUDE (A)     TO_EX (B)         EXCLUDE (A*B,B-A)    (B-A)=0
                                                         Delete (A-B)
                                                         Send Q(MA,A*B)
                                                         Filter Timer=MALI
Signed-off-by: Nikolay Aleksandrov <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>

2020-11-04  selftests: net: bridge: add test for mldv2 inc -> is_exclude report  (Nikolay Aleksandrov; 1 file changed, -1/+53)
The test checks for the following case:
  Router State    Report Received   New Router State      Actions
  INCLUDE (A)     IS_EX (B)         EXCLUDE (A*B, B-A)    (B-A)=0
                                                          Delete (A-B)
                                                          Filter Timer=MALI
Signed-off-by: Nikolay Aleksandrov <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>

2020-11-04  selftests: net: bridge: add test for mldv2 inc -> is_include report  (Nikolay Aleksandrov; 1 file changed, -1/+28)
The test checks for the following case:
  Router State    Report Received   New Router State   Actions
  INCLUDE (A)     IS_IN (B)         INCLUDE (A+B)      (B)=MALI
Signed-off-by: Nikolay Aleksandrov <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>

2020-11-04  selftests: net: bridge: add test for mldv2 inc -> allow report  (Nikolay Aleksandrov; 1 file changed, -1/+28)
The test checks for the following case:
  Router State    Report Received   New Router State   Actions
  INCLUDE (A)     ALLOW (B)         INCLUDE (A+B)      (B)=MALI
Signed-off-by: Nikolay Aleksandrov <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>

2020-11-04  selftests: net: bridge: add initial MLDv2 include test  (Nikolay Aleksandrov; 1 file changed, -0/+146)
Add the initial setup for MLDv2 tests with the first test of a simple is_include report. For MLDv2 we need to set up the bridge properly, and we also send the full precooked packets instead of relying on mausezahn to fill in some parts. For verification we use the generic S,G state checking functions from lib.sh.
Signed-off-by: Nikolay Aleksandrov <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>

2020-11-04  selftests: net: bridge: factor out and rename sg state functions  (Nikolay Aleksandrov; 2 files changed, -123/+123)
Factor out S,G entry state checking functions for existence, forwarding, blocking and timer to lib.sh so they can later be used by MLDv2 tests. Add a brmcast_ prefix to their names to make the relation to the bridge explicit.
Signed-off-by: Nikolay Aleksandrov <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>

2020-11-04  selftests: net: lib: add support for IPv6 mcast packet test  (Nikolay Aleksandrov; 1 file changed, -3/+11)
In order to test an IPv6 multicast packet we only need to pass different tc and mausezahn protocols, so add a simple check of the destination address which decides whether to generate an IPv4 or IPv6 mcast packet.
Signed-off-by: Nikolay Aleksandrov <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>

2020-11-04  selftests: net: bridge: factor out mcast_packet_test  (Nikolay Aleksandrov; 2 files changed, -32/+32)
Factor out mcast_packet_test into lib.sh so it can be later extended and reused by MLDv2 tests.
Signed-off-by: Nikolay Aleksandrov <[email protected]>
Signed-off-by: Jakub Kicinski <[email protected]>

2020-11-04  libbpf: Fix possible use after free in xsk_socket__delete  (Magnus Karlsson; 1 file changed, -2/+4)
Fix a possible use after free in xsk_socket__delete that will happen if xsk_put_ctx() frees the ctx. To fix, save the umem reference taken from the context and just use that instead.
Fixes: 2f6324a3937f ("libbpf: Support shared umems between queues and devices")
Signed-off-by: Magnus Karlsson <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

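As a hedged, self-contained illustration of the shape of the fix (all names are invented, not libbpf's xsk internals), the pattern is to copy what you still need out of the shared context before the call that may free it:

// Hedged sketch of the save-before-free pattern; illustrative names only.
#include <stdio.h>
#include <stdlib.h>

struct umem { int fd; };

struct ctx {
        int refcount;
        struct umem *umem;
};

/* Drops a reference and may free the context. */
static void put_ctx(struct ctx *ctx)
{
        if (--ctx->refcount == 0)
                free(ctx);
}

static void socket_delete(struct ctx *ctx)
{
        /* Save what we still need BEFORE the call that can free ctx. */
        struct umem *umem = ctx->umem;

        put_ctx(ctx);
        /* Safe: only the saved pointer is used from here on, never ctx. */
        printf("tearing down umem fd %d\n", umem->fd);
}

int main(void)
{
        struct umem umem = { .fd = 3 };
        struct ctx *ctx = malloc(sizeof(*ctx));

        ctx->refcount = 1;
        ctx->umem = &umem;
        socket_delete(ctx);
        return 0;
}
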
2020-11-04  libbpf: Fix null dereference in xsk_socket__delete  (Magnus Karlsson; 1 file changed, -1/+2)
Fix a possible null pointer dereference in xsk_socket__delete that will occur if a null pointer is fed into the function.
Fixes: 2f6324a3937f ("libbpf: Support shared umems between queues and devices")
Reported-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Magnus Karlsson <[email protected]>
Signed-off-by: Daniel Borkmann <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
Link: https://lore.kernel.org/bpf/[email protected]

2020-11-04  perf jevents: Add test for arch std events  (John Garry; 4 files changed, -0/+31)
Recently there was an undetected breakage for std arch event support. Add support in the "PMU events" testcase to detect such breakages. For this, the "test" arch needs support added to process std arch events. And a test event is added for the test itself. Also add a few code comments to help understand the code a bit better.
Committer testing:
Before:
  # perf test -vv pmu |& grep l3_cache_rd
  #
After:
  # perf test -vv pmu |& grep l3_cache_rd
        testing event table l3_cache_rd: pass
        testing aliases PMU cpu: matched event l3_cache_rd
  #
Signed-off-by: John Garry <[email protected]>
Reviewed-by: Kajol Jain <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>

2020-11-04  perf jevents: Tidy error handling  (John Garry; 1 file changed, -48/+35)
There is much duplication in the error handling for directory traversal when processing JSONs. Factor out the common code to tidy it up a bit.
Signed-off-by: John Garry <[email protected]>
Reviewed-by: Kajol Jain <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>

2020-11-04  perf trace beauty: Allow header files in a different path  (Namhyung Kim; 2 files changed, -3/+3)
The current script to generate mmap flags and prot checks headers from the uapi/asm-generic directory, but they might come from a different directory in some environments. So change the pattern to accept it.
Signed-off-by: Namhyung Kim <[email protected]>
Acked-by: Ian Rogers <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>

2020-11-04  perf stat: Add --quiet option  (Andi Kleen; 3 files changed, -1/+10)
Add a new --quiet option to 'perf stat'. This is useful with 'perf stat record' to write the data only to the perf.data file, which can lower measurement overhead because the data doesn't need to be formatted.
On my 4C desktop:
  % time ./perf stat record -e $(python -c 'print ",".join(["cycles"]*1000)') -a -I 1000 sleep 5
  ...
  real    0m5.377s
  user    0m0.238s
  sys     0m0.452s
  % time ./perf stat record --quiet -e $(python -c 'print ",".join(["cycles"]*1000)') -a -I 1000 sleep 5
  real    0m5.452s
  user    0m0.183s
  sys     0m0.423s
In this example it cuts the user time by 20%. On systems with more cores the savings are higher.
Signed-off-by: Andi Kleen <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Alexey Budankov <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>

2020-11-04  perf stat: Support regex pattern in --for-each-cgroup  (Namhyung Kim; 3 files changed, -26/+182)
To make the command line even more compact with cgroups, support regex pattern matching in cgroup names.
  $ perf stat -a -e cpu-clock,cycles --for-each-cgroup ^foo sleep 1

          3,000.73 msec cpu-clock                 foo      #    2.998 CPUs utilized
    12,530,992,699      cycles                    foo      #    7.517 GHz            (100.00%)
          1,000.61 msec cpu-clock                 foo/bar  #    1.000 CPUs utilized
     4,178,529,579      cycles                    foo/bar  #    2.506 GHz            (100.00%)
          1,000.03 msec cpu-clock                 foo/baz  #    0.999 CPUs utilized
     4,176,104,315      cycles                    foo/baz  #    2.505 GHz            (100.00%)

       1.000892614 seconds time elapsed
Signed-off-by: Namhyung Kim <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Ian Rogers <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>

2020-11-04  perf test: Use generic event for expand_libpfm_events()  (Namhyung Kim; 1 file changed, -1/+1)
I found that the UNHALTED_CORE_CYCLES event is only available on Intel machines, which makes the test fail on other vendors/archs. As libpfm4 can parse generic events like cycles, let's use them.
Fixes: 40b74c30ffb9 ("perf test: Add expand cgroup event test")
Signed-off-by: Namhyung Kim <[email protected]>
Acked-by: Ian Rogers <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>

2020-11-04  perf kvm: Add kvm-stat for arm64  (Sergey Senozhatsky; 4 files changed, -0/+179)
Add support for 'perf kvm stat' on arm64 platform.
Example:
  # perf kvm stat report

  Analyze events for all VMs, all VCPUs:

     VM-EXIT   Samples  Samples%   Time%   Min Time    Max Time       Avg time
    DABT_LOW    661867    98.91%  40.45%     2.19us   3364.65us      6.24us ( +-   0.34% )
         IRQ      4598     0.69%  57.44%     2.89us   3397.59us   1276.27us ( +-   1.61% )
         WFx      1475     0.22%   1.71%     2.22us   3388.63us    118.31us ( +-   8.69% )
    IABT_LOW      1018     0.15%   0.38%     2.22us   2742.07us     38.29us ( +-  12.55% )
       SYS64       180     0.03%   0.01%     2.07us    112.91us      6.57us ( +-  14.95% )
       HVC64        17     0.00%   0.01%     2.19us    322.35us     42.95us ( +-  58.98% )

  Total Samples:669155, Total events handled time:10216387.86us.
Signed-off-by: Sergey Senozhatsky <[email protected]>
Reviewed-by: Leo Yan <[email protected]>
Tested-by: Leo Yan <[email protected]>
Cc: John Garry <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Mathieu Poirier <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: [email protected]
Cc: Suleiman Souhlal <[email protected]>
Link: http://lore.kernel.org/lkml/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>