path: root/lib
Age    Commit message    Author    Files    Lines
2023-08-21radix tree: remove unused variableArnd Bergmann1-1/+0
Recent versions of clang warn about an unused variable, though older versions saw the 'slot++' as a use and did not warn: radix-tree.c:1136:50: error: parameter 'slot' set but not used [-Werror,-Wunused-but-set-parameter] It's clearly not needed any more, so just remove it. Link: https://lkml.kernel.org/r/[email protected] Fixes: 3a08cd52c37c7 ("radix tree: Remove multiorder support") Signed-off-by: Arnd Bergmann <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Nathan Chancellor <[email protected]> Cc: Nick Desaulniers <[email protected]> Cc: Peng Zhang <[email protected]> Cc: Rong Tao <[email protected]> Cc: Tom Rix <[email protected]> Cc: <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2023-08-21kunit: fix struct kunit_attr headerRae Moar1-0/+2
Add parameter descriptions to struct kunit_attr header for the parameters attr_default and print. Fixes: 39e92cb1e4a1 ("kunit: Add test attributes API structure") Reported-by: kernel test robot <[email protected]> Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/ Signed-off-by: Rae Moar <[email protected]> Reviewed-by: David Gow <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
2023-08-19kobject: Remove redundant checks for whether ktype is NULLZhen Lei1-14/+10
When adding a kobject or kset, we have already made sure that ktype cannot be NULL. Therefore, after adding a kobject or kset, there is no need to worry about ktype being NULL. Remove all the redundant ktype checks. Signed-off-by: Zhen Lei <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2023-08-19kobject: Add sanity check for kset->kobj.ktype in kset_register()Zhen Lei1-0/+5
When I register a kset in the following way: static struct kset my_kset; kobject_set_name(&my_kset.kobj, "my_kset"); ret = kset_register(&my_kset); a NULL pointer dereference occurs: [ 4453.568337] Unable to handle kernel NULL pointer dereference at \ virtual address 0000000000000028 ... ... [ 4453.810361] Call trace: [ 4453.813062] kobject_get_ownership+0xc/0x34 [ 4453.817493] kobject_add_internal+0x98/0x274 [ 4453.822005] kset_register+0x5c/0xb4 [ 4453.825820] my_kobj_init+0x44/0x1000 [my_kset] ... ... This happens because my_kset.kobj.ktype was never initialized. According to the description in Documentation/core-api/kobject.rst: - A ktype is the type of object that embeds a kobject. Every structure that embeds a kobject needs a corresponding ktype. So add a sanity check to make sure kset->kobj.ktype is not NULL. Signed-off-by: Zhen Lei <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
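A minimal sketch of the described check, written as a standalone helper; the exact placement inside kset_register() and the error code are assumptions.

#include <linux/errno.h>
#include <linux/kobject.h>
#include <linux/printk.h>

/* Hypothetical helper mirroring the sanity check described above. */
static int kset_check_ktype(const struct kset *k)
{
	if (!k->kobj.ktype) {
		pr_err("kset '%s' has no ktype, refusing to register\n",
		       kobject_name(&k->kobj));
		return -EINVAL;	/* assumed error code */
	}
	return 0;
}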
2023-08-18Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/netJakub Kicinski1-1/+1
Cross-merge networking fixes after downstream PR. Conflicts: drivers/net/ethernet/sfc/tc.c fa165e194997 ("sfc: don't unregister flow_indr if it was never registered") 3bf969e88ada ("sfc: add MAE table machinery for conntrack table") https://lore.kernel.org/all/[email protected]/ No adjacent changes. Signed-off-by: Jakub Kicinski <[email protected]>
2023-08-18nmi_backtrace: allow excluding an arbitrary CPUDouglas Anderson1-3/+3
The APIs that allow backtracing across CPUs have always had a way to exclude the current CPU. This convenience means callers didn't need to find a place to allocate a CPU mask just to handle the common case. Let's extend the API to take a CPU ID to exclude instead of just a boolean. This isn't any more complex for the API to handle and allows the hardlockup detector to exclude a different CPU (the one it already did a trace for) without needing to find space for a CPU mask. Arguably, this new API also encourages safer behavior. Specifically if the caller wants to avoid tracing the current CPU (maybe because they already traced the current CPU) this makes it more obvious to the caller that they need to make sure that the current CPU ID can't change. [[email protected]: fix trigger_allbutcpu_cpu_backtrace() stub] Link: https://lkml.kernel.org/r/20230804065935.v4.1.Ia35521b91fc781368945161d7b28538f9996c182@changeid Signed-off-by: Douglas Anderson <[email protected]> Acked-by: Michal Hocko <[email protected]> Cc: kernel test robot <[email protected]> Cc: Lecopzer Chen <[email protected]> Cc: Petr Mladek <[email protected]> Cc: Pingfan Liu <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
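A hedged illustration of the new call style; the function name follows the stub mentioned in the message, and its exact signature is an assumption.

#include <linux/nmi.h>
#include <linux/smp.h>

static void backtrace_other_cpus(void)
{
	int this_cpu = get_cpu();	/* pin the CPU id while it is in use */

	/* Backtrace every CPU except the one already handled. */
	trigger_allbutcpu_cpu_backtrace(this_cpu);
	put_cpu();
}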
2023-08-18lib/bch.c: use bitrev instead of internal logicJohn Sanpe2-36/+3
Replace internal logic with separate bitrev library. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: John Sanpe <[email protected]> Cc: Bhaskar Chowdhury <[email protected]> Cc: Randy Dunlap <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2023-08-18lib: error-inject: remove error checking for debugfs_create_dir()Wang Ming1-2/+0
It is expected that most callers should _ignore_ the errors returned by debugfs_create_dir() in ei_debugfs_init(). Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Wang Ming <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
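A sketch of the resulting pattern; the file created inside the directory is illustrative. debugfs calls accept an error pointer as a parent and degrade gracefully, which is why the return value is deliberately not checked.

#include <linux/debugfs.h>

static struct dentry *ei_dir;		/* illustrative */
static bool ei_fail_enabled;		/* illustrative */

static void ei_debugfs_init_sketch(void)
{
	/* No IS_ERR()/NULL check: debugfs failures are intentionally ignored. */
	ei_dir = debugfs_create_dir("error_injection", NULL);
	debugfs_create_bool("fail_enabled", 0600, ei_dir, &ei_fail_enabled);
}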
2023-08-18lib: remove error checking for debugfs_create_dir()Wang Ming1-3/+0
It is expected that most callers should _ignore_ the errors returned by debugfs_create_dir() in err_inject_init(). Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Wang Ming <[email protected]> Cc: Akinobu Mita <[email protected]> Cc: David Hildenbrand <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2023-08-18lib: replace kmap() with kmap_local_page()Sumitra Sharma1-8/+2
kmap() has been deprecated in favor of kmap_local_page() due to its high cost, restricted mapping space, the overhead of a global lock for synchronization, and making the process sleep in the absence of free slots. kmap_local_page() is faster than kmap(), offers thread-local and CPU-local mappings, can take page faults in a local kmap region, and preserves preemption by saving the mappings of outgoing tasks and restoring those of the incoming one during a context switch. The mappings are kept thread local in the functions “dmirror_do_read” and “dmirror_do_write” in test_hmm.c. Therefore, replace kmap() with kmap_local_page() and use memcpy_from/to_page() to avoid open coding kmap_local_page() + memcpy() + kunmap_local(). Remove the unused variable “tmp”. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Sumitra Sharma <[email protected]> Suggested-by: Fabio M. De Francesco <[email protected]> Reviewed-by: Fabio M. De Francesco <[email protected]> Reviewed-by: Ira Weiny <[email protected]> Cc: Deepak R Varma <[email protected]> Cc: Jérôme Glisse <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
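A sketch of the before/after pattern described here; the buffer and function names are illustrative rather than the actual test_hmm.c code.

#include <linux/highmem.h>
#include <linux/string.h>

/* Old pattern: open-coded global mapping plus copy. */
static void copy_from_page_old(struct page *page, void *dst, size_t len)
{
	void *tmp = kmap(page);

	memcpy(dst, tmp, len);
	kunmap(page);
}

/* New pattern: thread-local mapping and copy in one helper. */
static void copy_from_page_new(struct page *page, void *dst, size_t len)
{
	memcpy_from_page(dst, page, 0, len);
}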
2023-08-18maple_tree: reduce resets during store setupLiam R. Howlett1-11/+26
mas_prealloc() may walk partially down the tree before finding that a split or spanning store is needed. When the write occurs, relax the logic on resetting the walk so that partial walks will not restart; walks that have gone too far (a store that affects data beyond the current node) are still restarted. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Liam R. Howlett <[email protected]> Cc: Peng Zhang <[email protected]> Cc: Suren Baghdasaryan <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2023-08-18maple_tree: refine mas_preallocate() node calculationsLiam R. Howlett1-1/+43
Calculate the number of nodes based on the pending write action instead of assuming the worst case. This addresses a performance regression seen on platforms that have longer allocation timing. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Liam R. Howlett <[email protected]> Cc: Peng Zhang <[email protected]> Cc: Suren Baghdasaryan <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2023-08-18maple_tree: move mas_wr_end_piv() below mas_wr_extend_null()Liam R. Howlett1-16/+15
Relocate it and call mas_wr_extend_null() from within mas_wr_end_piv(). Extending the NULL may affect the end pivot value, so call mas_wr_extend_null() from within mas_wr_end_piv() to keep it all together. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Liam R. Howlett <[email protected]> Cc: Peng Zhang <[email protected]> Cc: Suren Baghdasaryan <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2023-08-18maple_tree: adjust node allocation on mas_rebalance()Liam R. Howlett1-1/+1
mas_rebalance() is called to rebalance an insufficient node into a single node or two sufficient nodes. The preallocation estimate is always too high in this case, as the height of the tree will never grow and there is no possibility of a three-way split, so revise the node allocation count. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Liam R. Howlett <[email protected]> Cc: Peng Zhang <[email protected]> Cc: Suren Baghdasaryan <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2023-08-18maple_tree: re-introduce entry to mas_preallocate() argumentsLiam R. Howlett1-1/+2
The current preallocation strategy is to preallocate the absolute worst-case allocation for a tree modification. The entry (or NULL) is needed to know how many nodes are needed to write to the tree. Start by adding the argument to the mas_preallocate() definition. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Liam R. Howlett <[email protected]> Cc: Peng Zhang <[email protected]> Cc: Suren Baghdasaryan <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2023-08-18maple_tree: add benchmarking for mas_prev()Liam R. Howlett1-0/+37
Add some benchmarking functions in testing for mas_prev(). This is useful to ensure there are no regressions added during modifications. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Liam R. Howlett <[email protected]> Cc: Peng Zhang <[email protected]> Cc: Suren Baghdasaryan <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2023-08-18maple_tree: add benchmarking for mas_for_eachLiam R. Howlett1-0/+39
Patch series "Reduce preallocations for maple tree", v3. Initial work on preallocations showed no regression in performance during testing, but recently some users (both on [1] and off [android] list) have reported that preallocating the worst-case number of nodes has caused some slow down. This patch set addresses the number of allocations in a few ways. During munmap() most munmap() operations will remove a single VMA, so leverage the fact that the maple tree can place a single pointer at range 0 - 0 without allocating. This is done by changing the index of the VMAs to be indexed by the count, starting at 0. Re-introduce the entry argument to mas_preallocate() so that a more intelligent guess of the node count can be made. Implement the more intelligent guess of the node count, although there is more work to be done. During development of v2 of this patch set, I also noticed that the number of nodes being allocated for a rebalance was beyond what could possibly be needed. This is addressed in patch 0008. This patch (of 15): Add a way to test the speed of mas_for_each() to the testing code. Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Liam R. Howlett <[email protected]> Cc: Peng Zhang <[email protected]> Cc: Suren Baghdasaryan <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2023-08-18maple_tree: Be more strict about lockingLiam R. Howlett1-2/+8
Use lockdep to check that the write path in the maple tree holds the lock in write mode. Introduce mt_write_lock_is_held() to check if the lock is held for writing. Update the necessary checks for rcu_dereference_protected() to use the new write lock check. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Liam R. Howlett <[email protected]> Cc: Linus Torvalds <[email protected]> Cc: Oliver Sang <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2023-08-18maple_tree: mtree_insert: fix typo in kernel-doc description of GFP flagsMike Rapoport (IBM)1-1/+1
Replace FGP_FLAGS with GFP_FLAGS Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Mike Rapoport (IBM) <[email protected]> Reviewed-by: Liam R. Howlett <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2023-08-18maple_tree: mtree_insert*: fix typo in kernel-doc descriptionMike Rapoport (IBM)1-2/+2
Replace "Insert and entry at a give index" with "Insert an entry at a given index" Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Mike Rapoport (IBM) <[email protected]> Reviewed-by: Liam R. Howlett <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2023-08-18lib/test_meminit: allocate pages up to order MAX_ORDERAndrew Donnellan1-1/+1
test_pages() tests the page allocator by calling alloc_pages() with different orders up to order 10. However, different architectures and platforms support different maximum contiguous allocation sizes. The default maximum allocation order (MAX_ORDER) is 10, but architectures can use CONFIG_ARCH_FORCE_MAX_ORDER to override this. On platforms where this is less than 10, test_meminit() will blow up with a WARN(). This is expected, so let's not do that. Replace the hardcoded "10" with the MAX_ORDER macro so that we test allocations up to the expected platform limit. Link: https://lkml.kernel.org/r/[email protected] Fixes: 5015a300a522 ("lib: introduce test_meminit module") Signed-off-by: Andrew Donnellan <[email protected]> Reviewed-by: Alexander Potapenko <[email protected]> Cc: Xiaoke Wang <[email protected]> Cc: <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
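A simplified sketch of the loop-bound change, with the surrounding test_pages() details omitted; only the use of MAX_ORDER as the upper bound is taken from the message.

#include <linux/gfp.h>
#include <linux/mm.h>

static int walk_allocation_orders(void)
{
	int order, failures = 0;

	/* Test every order the platform supports instead of a hardcoded 10. */
	for (order = 0; order <= MAX_ORDER; order++) {
		struct page *page = alloc_pages(GFP_KERNEL, order);

		if (!page) {
			failures++;
			continue;
		}
		__free_pages(page, order);
	}
	return failures;
}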
2023-08-18maple_tree: drop mas_first_entry()Peng Zhang1-72/+0
The internal function mas_first_entry() is no longer used, so drop it. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Peng Zhang <[email protected]> Reviewed-by: Liam R. Howlett <[email protected]> Tested-by: Geert Uytterhoeven <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2023-08-18maple_tree: replace mas_logical_pivot() with mas_safe_pivot()Peng Zhang1-30/+3
Replace mas_logical_pivot() with mas_safe_pivot() and drop mas_logical_pivot() since it won't be used anymore. We can do this since all nodes will now have a node limit pivot (if the node is not full). Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Peng Zhang <[email protected]> Reviewed-by: Liam R. Howlett <[email protected]> Tested-by: Geert Uytterhoeven <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2023-08-18maple_tree: update mt_validate()Peng Zhang1-10/+9
Instead of using mas_first_entry() to find the leftmost leaf, use a simple loop. Remove an unneeded check for the root node. To make the error message more accurate, check pivots first and then slots, because checking slots depends on the node limit pivot to break the loop. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Peng Zhang <[email protected]> Tested-by: Geert Uytterhoeven <[email protected]> Reviewed-by: Liam R. Howlett <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2023-08-18maple_tree: make mas_validate_limits() check root node and node limitPeng Zhang1-16/+14
Update mas_validate_limits() to check the root node, check the node limit pivot if there is enough room for it to exist, and check data_end. Remove the check for child existence as it is done in mas_validate_child_slot(). Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Peng Zhang <[email protected]> Tested-by: Geert Uytterhoeven <[email protected]> Reviewed-by: Liam R. Howlett <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2023-08-18maple_tree: fix mas_validate_child_slot() to check last missed slotPeng Zhang1-4/+8
Don't break out of the loop before checking the last slot. Also check here whether non-leaf nodes are missing children. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Peng Zhang <[email protected]> Reviewed-by: Liam R. Howlett <[email protected]> Tested-by: Geert Uytterhoeven <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2023-08-18maple_tree: make mas_validate_gaps() to check metadataPeng Zhang1-36/+42
Make mas_validate_gaps() check whether the offset in the metadata points to the largest gap. While at it, simplify this function and add verification that gaps beyond the node limit are zero. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Peng Zhang <[email protected]> Tested-by: Geert Uytterhoeven <[email protected]> Reviewed-by: Liam R. Howlett <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2023-08-18maple_tree: don't use MAPLE_ARANGE64_META_MAX to indicate no gapPeng Zhang1-11/+2
Patch series "Improve the validation for maple tree and some cleanup", v2. This patch (of 7): Do not use a special offset to indicate that there is no gap. When there is no gap, offset can point to any valid slots because its gap is 0. Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Peng Zhang <[email protected]> Reviewed-by: Liam R. Howlett <[email protected]> Tested-by: Geert Uytterhoeven <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2023-08-18maple_tree: add a fast path case in mas_wr_slot_store()Peng Zhang1-12/+24
When expanding a range in two directions, only partially overwriting the previous and next ranges, the number of entries will not be increased, so we can just update the pivots as a fast path. However, it may introduce potential risks in RCU mode, because it updates two pivots. We only enable it in non-RCU mode. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Peng Zhang <[email protected]> Reviewed-by: Liam R. Howlett <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2023-08-18maple_tree: optimize mas_wr_append(), also improve duplicating VMAsPeng Zhang1-11/+22
When the new range can be completely covered by the original last range without touching the boundaries on both sides, two new entries can be appended to the end as a fast path. We update the original last pivot at the end, and the newly appended two entries will not be accessed before this, so it is also safe in RCU mode. This is useful for sequential insertion, which is what we do in dup_mmap(). Enabling BENCH_FORK in test_maple_tree and just running bench_forking() gives the following timings: before: 17,874.83 msec, after: 15,738.38 msec. It shows about a 12% performance improvement for duplicating VMAs. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Peng Zhang <[email protected]> Reviewed-by: Liam R. Howlett <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2023-08-18maple_tree: add test for mas_wr_modify() fast pathPeng Zhang1-0/+65
Patch series "Optimize the fast path of mas_store()", v4. Add fast paths for mas_wr_append() and mas_wr_slot_store() respectively. The newly added fast path of mas_wr_append() is used in fork() and how much it benefits fork() depends on how many VMAs are duplicated. Thanks Liam for the review. This patch (of 4): Add tests for all cases of mas_wr_append() and mas_wr_slot_store(). Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Peng Zhang <[email protected]> Reviewed-by: Liam R. Howlett <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2023-08-18maple_tree: fix a few documentation issuesThomas Gleixner1-5/+21
The documentation of mt_next() claims that it starts the search at the provided index. That's incorrect as it starts the search after the provided index. The documentation of mt_find() is slightly confusing. "Handles locking" is not really helpful as it does not explain how the "locking" works. Also the documentation of index talks about a range, while in reality the index is updated on a successful search to the index of the found entry plus one. Fix similar issues for mt_find_after() and mt_prev(). Reword the confusing "Note: Will not return the zero entry." comment on mt_for_each() and document @__index correctly. Link: https://lkml.kernel.org/r/87ttw2n556.ffs@tglx Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Liam R. Howlett <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Shanker Donthineni <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
2023-08-18bpf/tests: Enhance output on error and fix typosHelge Deller1-5/+7
If a testcase returns a wrong (unexpected) value, print the expected and returned value in hex notation in addition to the decimal notation. This is very useful in tests which bit-shift hex values left or right and helped me a lot while developing the JIT compiler for the hppa architecture. Additionally fix two typos: dowrd -> dword, tall calls -> tail calls. Signed-off-by: Helge Deller <[email protected]> Signed-off-by: Daniel Borkmann <[email protected]> Link: https://lore.kernel.org/bpf/ZN6ZAAVoWZpsD1Jf@p100
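A trivial sketch of the kind of reporting described; the helper name and exact message format are illustrative.

#include <linux/printk.h>

static void report_test_mismatch(unsigned long long expected,
				 unsigned long long got)
{
	/* Hex output makes shifted bit patterns easy to compare. */
	pr_err("wrong result: expected %llu (%#llx), got %llu (%#llx)\n",
	       expected, expected, got, got);
}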
2023-08-18iov_iter: Export import_ubuf()Takashi Iwai1-0/+1
Export import_ubuf() to be used in sound subsystem for generic memory handling as Linus suggested. It's used for constructing an iov_iter of a single segment user-space copy for PCM data. Cc: Alexander Viro <[email protected]> Link: https://lore.kernel.org/r/CAHk-=wh-mUL6mp4chAc6E_UjwpPLyCPRCJK+iB4ZMD2BqjwGHA@mail.gmail.com Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Takashi Iwai <[email protected]>
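A sketch of the intended use: wrapping a single user-space segment in an iov_iter. The helper name and call site are illustrative rather than the actual sound-subsystem code, and ITER_SOURCE is assumed as the copy-from-user direction.

#include <linux/uio.h>

static int wrap_pcm_user_buffer(void __user *buf, size_t bytes,
				struct iov_iter *iter)
{
	/* Build a single-segment iterator over the user buffer. */
	return import_ubuf(ITER_SOURCE, buf, bytes, iter);
}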
2023-08-16lib: test_scanf: Add explicit type cast to result initialization in test_number_prefix()Nathan Chancellor1-1/+1
A recent change in clang allows it to consider more expressions as compile time constants, which causes it to point out an implicit conversion in the scanf tests: lib/test_scanf.c:661:2: warning: implicit conversion from 'int' to 'unsigned char' changes value from -168 to 88 [-Wconstant-conversion] 661 | test_number_prefix(unsigned char, "0xA7", "%2hhx%hhx", 0, 0xa7, 2, check_uchar); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ lib/test_scanf.c:609:29: note: expanded from macro 'test_number_prefix' 609 | T result[2] = {~expect[0], ~expect[1]}; \ | ~ ^~~~~~~~~~ 1 warning generated. The result of the bitwise negation is the type of the operand after going through the integer promotion rules, so this truncation is expected but harmless, as the initial values in the result array get overwritten by _test() anyway. Add an explicit cast to the expected type in test_number_prefix() to silence the warning. There is no functional change, as all the tests still pass with GCC 13.1.0 and clang 18.0.0. Cc: [email protected] Link: https://github.com/ClangBuiltLinux/linuxq/issues/1899 Link: https://github.com/llvm/llvm-project/commit/610ec954e1f81c0e8fcadedcd25afe643f5a094e Suggested-by: Nick Desaulniers <[email protected]> Signed-off-by: Nathan Chancellor <[email protected]> Reviewed-by: Petr Mladek <[email protected]> Signed-off-by: Petr Mladek <[email protected]> Link: https://lore.kernel.org/r/20230807-test_scanf-wconstant-conversion-v2-1-839ca39083e1@kernel.org
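A standalone illustration of the warning and the cast that silences it, assuming the fix applies the cast directly in the macro's initializer (written here with a concrete type instead of the macro's T parameter).

static void scanf_cast_example(void)
{
	const unsigned char expect[2] = { 0xa7, 0x02 };
	/* unsigned char result[2] = { ~expect[0], ~expect[1] };  triggers -Wconstant-conversion */
	unsigned char result[2] = { (unsigned char)~expect[0],
				    (unsigned char)~expect[1] };

	(void)result;	/* the real test overwrites these values anyway */
}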
2023-08-15hardening: Move BUG_ON_DATA_CORRUPTION to hardening optionsMarco Elver1-11/+1
BUG_ON_DATA_CORRUPTION turns detected corruptions of list data structures from WARNings into BUGs. This can be useful to stop further corruptions or even exploitation attempts. However, the option has less to do with debugging than with hardening. With the introduction of LIST_HARDENED, it makes more sense to move it to the hardening options, where it selects LIST_HARDENED instead. Without this change, combining BUG_ON_DATA_CORRUPTION with LIST_HARDENED alone wouldn't be possible, because DEBUG_LIST would always be selected by BUG_ON_DATA_CORRUPTION. Signed-off-by: Marco Elver <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Kees Cook <[email protected]>
2023-08-15list: Introduce CONFIG_LIST_HARDENEDMarco Elver3-4/+12
Numerous production kernel configs (see [1, 2]) are choosing to enable CONFIG_DEBUG_LIST, which is also being recommended by KSPP for hardened configs [3]. The motivation behind this is that the option can be used as a security hardening feature (e.g. CVE-2019-2215 and CVE-2019-2025 are mitigated by the option [4]). The feature has never been designed with performance in mind, yet common list manipulation is happening across hot paths all over the kernel. Introduce CONFIG_LIST_HARDENED, which performs list pointer checking inline, and only upon list corruption calls the reporting slow path. To generate optimal machine code with CONFIG_LIST_HARDENED: 1. Elide checking for pointer values which upon dereference would result in an immediate access fault (i.e. minimal hardening checks). The trade-off is lower-quality error reports. 2. Use the __preserve_most function attribute (available with Clang, but not yet with GCC) to minimize the code footprint for calling the reporting slow path. As a result, function size of callers is reduced by avoiding saving registers before calling the rarely called reporting slow path. Note that all TUs in lib/Makefile already disable function tracing, including list_debug.c, and __preserve_most's implied notrace has no effect in this case. 3. Because the inline checks are a subset of the full set of checks in __list_*_valid_or_report(), always return false if the inline checks failed. This avoids redundant compare and conditional branch right after return from the slow path. As a side-effect of the checks being inline, if the compiler can prove some condition to always be true, it can completely elide some checks. Since DEBUG_LIST is functionally a superset of LIST_HARDENED, the Kconfig variables are changed to reflect that: DEBUG_LIST selects LIST_HARDENED, whereas LIST_HARDENED itself has no dependency on DEBUG_LIST. Running netperf with CONFIG_LIST_HARDENED (using a Clang compiler with "preserve_most") shows throughput improvements, in my case of ~7% on average (up to 20-30% on some test cases). Link: https://r.android.com/1266735 [1] Link: https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/blob/main/config [2] Link: https://kernsec.org/wiki/index.php/Kernel_Self_Protection_Project/Recommended_Settings [3] Link: https://googleprojectzero.blogspot.com/2019/11/bad-binder-android-in-wild-exploit.html [4] Signed-off-by: Marco Elver <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Kees Cook <[email protected]>
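A simplified, hedged sketch of the inline-check shape described above; the helper names are invented here to avoid implying the real lib/list_debug.c interfaces, and the minimal checks shown are only the pointer comparisons mentioned in the message.

#include <linux/compiler.h>
#include <linux/list.h>
#include <linux/printk.h>

/* Stand-in for the out-of-line reporting slow path. */
static bool report_list_add_corruption(struct list_head *new,
				       struct list_head *prev,
				       struct list_head *next)
{
	pr_err("list_add corruption: new=%px prev=%px next=%px\n",
	       new, prev, next);
	return false;
}

/* Cheap checks stay inline; only a corrupted list reaches the slow path. */
static __always_inline bool list_add_valid_sketch(struct list_head *new,
						  struct list_head *prev,
						  struct list_head *next)
{
	if (likely(next->prev == prev && prev->next == next &&
		   new != prev && new != next))
		return true;

	return report_list_add_corruption(new, prev, next);
}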
2023-08-15list_debug: Introduce inline wrappers for debug checksMarco Elver1-6/+5
Turn the list debug checking functions __list_*_valid() into inline functions that wrap the out-of-line functions. Care is taken to ensure the inline wrappers are always inlined, so that additional compiler instrumentation (such as sanitizers) does not result in redundant outlining. This change is preparation for performing checks in the inline wrappers. No functional change intended. Signed-off-by: Marco Elver <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Kees Cook <[email protected]>
2023-08-15raid6: test: only check for Altivec if building on powerpc hostsWANG Xuerui1-9/+10
Altivec is only available for powerpc hosts, so only check for its availability when the host is powerpc, to avoid error messages being shown on architectures other than x86, arm or powerpc. Signed-off-by: WANG Xuerui <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Song Liu <[email protected]>
2023-08-15raid6: test: make sure all intermediate and artifact files are .gitignoredWANG Xuerui1-0/+3
Currently when the raid6test utility is built, the resulting binary and an int.uc file are not being ignored, which can get inadvertently committed as a result when one works on the raid6 code. Ignore them to make `git status` clean at all times. Signed-off-by: WANG Xuerui <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Song Liu <[email protected]>
2023-08-15raid6: test: cosmetic cleanups for the test MakefileWANG Xuerui1-15/+16
Use tabs/spaces consistently: hard tabs for marking recipe lines only, spaces for everything else. Also, the OPTFLAGS declaration actually included the tabs preceding the line comment, making compiler invocation lines unnecessarily long. As the entire block of declarations are meant for ad-hoc customization (otherwise they would probably make use of `?=` instead of `=`), move the "Adjust as desired" comment above the block too to fix the long invocation lines. Signed-off-by: WANG Xuerui <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Song Liu <[email protected]>
2023-08-15raid6: guard the tables.c include of <linux/export.h> with __KERNEL__WANG Xuerui1-0/+2
The export directives for the tables are already emitted with __KERNEL__ guards, but the <linux/export.h> include is not, causing errors when building the raid6test program. Guard this include too to fix the raid6test build. Signed-off-by: WANG Xuerui <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Song Liu <[email protected]>
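The guarded include takes the shape below; lib/raid6/tables.c is generated, so this only shows the emitted pattern rather than the generator change itself.

#ifdef __KERNEL__
#include <linux/export.h>
#endif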
2023-08-15raid6: remove the <linux/export.h> include from recov.cWANG Xuerui1-1/+0
There is no exported symbol left in recov.c, so the include is now unnecessary, and breaks the raid6test build. Remove it. Signed-off-by: WANG Xuerui <[email protected]> Link: https://lore.kernel.org/r/[email protected] Signed-off-by: Song Liu <[email protected]>
2023-08-13Merge 6.5-rc6 into char-misc-nextGreg Kroah-Hartman5-7/+16
We need the char/misc fixes in here as well to build on top of. Signed-off-by: Greg Kroah-Hartman <[email protected]>
2023-08-11Merge tag 'mm-hotfixes-stable-2023-08-11-13-44' of ↵Linus Torvalds1-1/+1
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm Pull misc fixes from Andrew Morton: "14 hotfixes. 11 of these are cc:stable and the remainder address post-6.4 issues, or are not considered suitable for -stable backporting" * tag 'mm-hotfixes-stable-2023-08-11-13-44' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: mm/damon/core: initialize damo_filter->list from damos_new_filter() nilfs2: fix use-after-free of nilfs_root in dirtying inodes via iput selftests: cgroup: fix test_kmem_basic false positives fs/proc/kcore: reinstate bounce buffer for KCORE_TEXT regions MAINTAINERS: add maple tree mailing list mm: compaction: fix endless looping over same migrate block selftests: mm: ksm: fix incorrect evaluation of parameter hugetlb: do not clear hugetlb dtor until allocating vmemmap mm: memory-failure: avoid false hwpoison page mapped error info mm: memory-failure: fix potential unexpected return value from unpoison_memory() mm/swapfile: fix wrong swap entry type for hwpoisoned swapcache page radix tree test suite: fix incorrect allocation size for pthreads crypto, cifs: fix error handling in extract_iter_to_sg() zsmalloc: fix races between modifications of fullness and isolated
2023-08-11crypto: lib/mpi - avoid null pointer deref in mpi_cmp_ui()Mark O'Donovan1-2/+6
During NVMeTCP Authentication a controller can trigger a kernel oops by specifying the 8192-bit Diffie-Hellman group and passing a correctly sized, but zeroed, Diffie-Hellman value. mpi_cmp_ui() was detecting this if the second parameter was 0, but 1 is passed from dh_is_pubkey_valid(). This causes the null pointer u->d to be dereferenced towards the end of mpi_cmp_ui(). Signed-off-by: Mark O'Donovan <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
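A hedged sketch of the kind of guard the fix implies: an MPI with no limbs represents zero, so it can be compared against v without dereferencing u->d. The helper name, its placement, and the use of the nlimbs field are assumptions about the mpi internals rather than the actual patch.

#include <linux/mpi.h>

/* Returns 1 and sets *result if u has no limbs; 0 means do the full compare. */
static int cmp_limbless_mpi_ui(MPI u, unsigned long v, int *result)
{
	if (u->nlimbs != 0)
		return 0;

	*result = (v == 0) ? 0 : -1;	/* a zero MPI is below any nonzero v */
	return 1;
}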
2023-08-11crypto: lib - Move mpi into lib/cryptoHerbert Xu28-1/+2
As lib/mpi is mostly used by crypto code, move it under lib/crypto so that patches touching it get directed to the right mailing list. Signed-off-by: Herbert Xu <[email protected]> Reviewed-by: Mimi Zohar <[email protected]> Signed-off-by: Herbert Xu <[email protected]>
2023-08-10Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/netJakub Kicinski1-1/+1
Cross-merge networking fixes after downstream PR. No conflicts. Adjacent changes: drivers/net/ethernet/intel/igc/igc_main.c 06b412589eef ("igc: Add lock to safeguard global Qbv variables") d3750076d464 ("igc: Add TransmissionOverrun counter") drivers/net/ethernet/microsoft/mana/mana_en.c a7dfeda6fdec ("net: mana: Fix MANA VF unload when hardware is unresponsive") a9ca9f9ceff3 ("page_pool: split types and declarations from page_pool.h") 92272ec4107e ("eth: add missing xdp.h includes in drivers") net/mptcp/protocol.h 511b90e39250 ("mptcp: fix disconnect vs accept race") b8dc6d6ce931 ("mptcp: fix rcv buffer auto-tuning") tools/testing/selftests/net/mptcp/mptcp_join.sh c8c101ae390a ("selftests: mptcp: join: fix 'implicit EP' test") 03668c65d153 ("selftests: mptcp: join: rework detailed report") Signed-off-by: Jakub Kicinski <[email protected]>
2023-08-08kunit: Allow kunit test modules to use test filteringJanusz Krzysztofik3-27/+55
External tools, e.g., Intel GPU tools (IGT), support execution of individual selftests provided by kernel modules. That could also be applicable to kunit test modules if they provided test filtering. But test filtering is now possible only when kunit code is built into the kernel. Moreover, a filter can be specified only at boot time, and a reboot is required each time a different filter is needed. Build the test filtering code also when kunit is configured as a module, expose test filtering functions to other kunit source files, and use them in kunit module notifier callback functions. Userspace can then reload the kunit module with a value of the filter_glob parameter tuned to a specific kunit test module every time it wants to limit the scope of tests executed on that module load. Make the kunit.filter* parameters visible in sysfs for user convenience. v5: Refresh on top of attributes filtering fix v4: Refresh on top of newly applied attributes patches and changes introduced by new versions of other patches submitted in series with this one. v3: Fix CONFIG_GLOB, required by filtering functions, not selected when building as a module ([email protected]). v2: Fix new name of a structure moved to kunit namespace not updated across all uses ([email protected]). Signed-off-by: Janusz Krzysztofik <[email protected]> Reviewed-by: David Gow <[email protected]> Signed-off-by: Shuah Khan <[email protected]>
2023-08-08kunit: Make 'list' action available to kunit test modulesJanusz Krzysztofik2-14/+32
Results from kunit tests reported via dmesg may be interleaved with other kernel messages. When parsing dmesg for modular kunit results in real time, external tools, e.g., Intel GPU tools (IGT), may want to insert their own test name markers into dmesg at the start of each test, before any kernel message related to that test appears there, so existing upper level test result parsers have no doubt which test to blame for a specific kernel message. Unfortunately, kunit reports names of tests only at their completion (with the exception of a non-standardized "# Subtest: <name>" header above a test plan of each test suite or parametrized test). External tools could be able to insert their own "start of the test" markers with test names included if they knew those names in advance. Test names could be learned from a list if provided by a kunit test module. There exists a feature of listing kunit tests without actually executing them, but it is now limited to configurations with the kunit module built in and covers only built-in tests, already available at boot time. Moreover, switching from list to normal mode requires a reboot. If that feature was also available when kunit is built as a module, userspace could load the module with action=list parameter, load some kunit test modules they are interested in and learn about the list of tests provided by those modules, then unload them, reload the kunit module in normal mode and execute the tests with their lists already known. Extend kunit module notifier initialization callback with a processing path for only listing the tests provided by a module if the kunit action parameter is set to "list" or "list_attr". For user convenience, make the kunit.action parameter visible in sysfs. v2: Don't use a different format, use kunit_exec_list_tests() (Rae), - refresh on top of new attributes patches, handle newly introduced kunit.action=list_attr case (Rae). Signed-off-by: Janusz Krzysztofik <[email protected]> Cc: Rae Moar <[email protected]> Reviewed-by: David Gow <[email protected]> Signed-off-by: Shuah Khan <[email protected]>