path: root/include/linux
Age | Commit message | Author | Files | Lines
2021-11-06 | include/linux/io-mapping.h: remove fallback for writecombine | Lucas De Marchi | 1 | -6/+0
The fallback was introduced in commit 80c33624e472 ("io-mapping: Fixup for different names of writecombine") to fix the build on microblaze. 5 years later, it seems all archs now provide a pgprot_writecombine(), so just remove the other possible fallbacks. For microblaze, pgprot_writecombine() is available since commit 97ccedd793ac ("microblaze: Provide pgprot_device/writecombine macros for nommu"). This is build-tested on microblaze with a hack to always build mm/io-mapping.o and without DIYing on an x86-only macro (_PAGE_CACHE_MASK) Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Lucas De Marchi <[email protected]> Cc: Chris Wilson <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: Joonas Lahtinen <[email protected]> Cc: Peter Zijlstra <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
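For readers unfamiliar with the pattern, a fallback of this kind in a header is typically just a guarded macro definition; the following is an illustrative sketch (not the exact lines removed from io-mapping.h), assuming pgprot_noncached() as the conservative stand-in:

    /*
     * Illustrative sketch of the kind of fallback this commit removes:
     * if an architecture did not define pgprot_writecombine(), degrade
     * to an uncached mapping. With every architecture now providing the
     * macro, such a guard is dead code.
     */
    #ifndef pgprot_writecombine
    #define pgprot_writecombine(prot) pgprot_noncached(prot)
    #endif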
2021-11-06 | memory: remove unused CONFIG_MEM_BLOCK_SIZE | Lukas Bulwahn | 1 | -1/+0
Commit 3947be1969a9 ("[PATCH] memory hotplug: sysfs and add/remove functions") defines CONFIG_MEM_BLOCK_SIZE, but this has never been utilized anywhere. It is a good practice to keep the CONFIG_* defines exclusively for the Kbuild system. So, drop this unused definition. This issue was noticed due to running ./scripts/checkkconfigsymbols.py. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Lukas Bulwahn <[email protected]> Reviewed-by: David Hildenbrand <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Dave Hansen <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-11-06 | mm: add zap_skip_check_mapping() helper | Peter Xu | 1 | -1/+15
Use the helper for the checks. Rename "check_mapping" into "zap_mapping" because "check_mapping" looks like a bool but in fact it stores the mapping itself. When it's set, we check the mapping (it must be non-NULL). When it's cleared we skip the check, which works like the old way. Move the duplicated comments to the helper too. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Peter Xu <[email protected]> Reviewed-by: Alistair Popple <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: Axel Rasmussen <[email protected]> Cc: David Hildenbrand <[email protected]> Cc: Hugh Dickins <[email protected]> Cc: Jerome Glisse <[email protected]> Cc: "Kirill A . Shutemov" <[email protected]> Cc: Liam Howlett <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Miaohe Lin <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Yang Shi <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
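A minimal sketch of the helper as described above (the exact body in mm/memory.c may differ slightly; page_rmapping() is assumed here as the way to read the page's mapping):

    /*
     * Sketch: when details->zap_mapping is set, only zap pages that belong
     * to that mapping; when it is NULL, skip the check entirely, matching
     * the old behaviour.
     */
    static inline bool zap_skip_check_mapping(struct zap_details *details,
                                              struct page *page)
    {
            /* No details or no mapping to compare against: zap everything. */
            if (!details || !details->zap_mapping)
                    return false;

            /* Skip (do not zap) pages owned by a different mapping. */
            return details->zap_mapping != page_rmapping(page);
    }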
2021-11-06 | mm: drop first_index/last_index in zap_details | Peter Xu | 1 | -2/+0
The first_index/last_index parameters in zap_details are actually only used in unmap_mapping_range_tree(). At the meantime, this function is only called by unmap_mapping_pages() once. Instead of passing these two variables through the whole stack of page zapping code, remove them from zap_details and let them simply be parameters of unmap_mapping_range_tree(), which is inlined. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Peter Xu <[email protected]> Reviewed-by: Alistair Popple <[email protected]> Reviewed-by: David Hildenbrand <[email protected]> Reviewed-by: Liam Howlett <[email protected]> Acked-by: Hugh Dickins <[email protected]> Cc: Andrea Arcangeli <[email protected]> Cc: Axel Rasmussen <[email protected]> Cc: Jerome Glisse <[email protected]> Cc: "Kirill A . Shutemov" <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Miaohe Lin <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Yang Shi <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-11-06 | mm: use __pfn_to_section() instead of open coding it | Rolf Eike Beer | 1 | -2/+2
It is defined in the same file just a few lines above. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Rolf Eike Beer <[email protected]> Reviewed-by: Andrew Morton <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-11-06 | mm: memcontrol: remove the kmem states | Muchun Song | 1 | -7/+0
Now the kmem state is only used to indicate whether kmem is offline. However, we can set ->kmemcg_id to -1 to convey the same information, so the kmem states can be removed to simplify the code. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Muchun Song <[email protected]> Acked-by: Roman Gushchin <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Shakeel Butt <[email protected]> Cc: Matthew Wilcox (Oracle) <[email protected]> Cc: Johannes Weiner <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-11-06 | mm: simplify bdi refcounting | Christoph Hellwig | 1 | -0/+3
Move grabbing and releasing the bdi refcount out of the common wb_init/wb_exit helpers into code that is only used for the non-default memcg driven bdi_writeback structures. [[email protected]: add comment] Link: https://lkml.kernel.org/r/[email protected] [[email protected]: fix typo] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Christoph Hellwig <[email protected]> Reviewed-by: Jan Kara <[email protected]> Cc: Miquel Raynal <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Vignesh Raghavendra <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-11-06 | fs: explicitly unregister per-superblock BDIs | Christoph Hellwig | 1 | -0/+1
Add a new SB_I_ flag to mark superblocks that have an ephemeral bdi associated with them, and unregister it when the superblock is shut down. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Christoph Hellwig <[email protected]> Reviewed-by: Jan Kara <[email protected]> Cc: Miquel Raynal <[email protected]> Cc: Richard Weinberger <[email protected]> Cc: Vignesh Raghavendra <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-11-06 | percpu: add __alloc_size attributes for better bounds checking | Kees Cook | 1 | -3/+3
As already done in GrapheneOS, add the __alloc_size attribute for appropriate percpu allocator interfaces, to provide additional hinting for better bounds checking, assisting CONFIG_FORTIFY_SOURCE and other compiler optimizations. Note that due to the implementation of the percpu API, this is unlikely to ever actually provide compile-time checking beyond very simple non-SMP builds. But, since they are technically allocators, mark them as such. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Kees Cook <[email protected]> Co-developed-by: Daniel Micay <[email protected]> Signed-off-by: Daniel Micay <[email protected]> Acked-by: Dennis Zhou <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Andy Whitcroft <[email protected]> Cc: David Rientjes <[email protected]> Cc: Dwaipayan Ray <[email protected]> Cc: Joe Perches <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Lukas Bulwahn <[email protected]> Cc: Miguel Ojeda <[email protected]> Cc: Nathan Chancellor <[email protected]> Cc: Nick Desaulniers <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Alexandre Bounine <[email protected]> Cc: Gustavo A. R. Silva <[email protected]> Cc: Ira Weiny <[email protected]> Cc: Jing Xiangfeng <[email protected]> Cc: John Hubbard <[email protected]> Cc: kernel test robot <[email protected]> Cc: Matt Porter <[email protected]> Cc: Randy Dunlap <[email protected]> Cc: Souptick Joarder <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-11-06 | mm/page_alloc: add __alloc_size attributes for better bounds checking | Kees Cook | 1 | -2/+2
As already done in GrapheneOS, add the __alloc_size attribute for appropriate page allocator interfaces, to provide additional hinting for better bounds checking, assisting CONFIG_FORTIFY_SOURCE and other compiler optimizations. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Kees Cook <[email protected]> Co-developed-by: Daniel Micay <[email protected]> Signed-off-by: Daniel Micay <[email protected]> Cc: Andy Whitcroft <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: David Rientjes <[email protected]> Cc: Dennis Zhou <[email protected]> Cc: Dwaipayan Ray <[email protected]> Cc: Joe Perches <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Lukas Bulwahn <[email protected]> Cc: Miguel Ojeda <[email protected]> Cc: Nathan Chancellor <[email protected]> Cc: Nick Desaulniers <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Alexandre Bounine <[email protected]> Cc: Gustavo A. R. Silva <[email protected]> Cc: Ira Weiny <[email protected]> Cc: Jing Xiangfeng <[email protected]> Cc: John Hubbard <[email protected]> Cc: kernel test robot <[email protected]> Cc: Matt Porter <[email protected]> Cc: Randy Dunlap <[email protected]> Cc: Souptick Joarder <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-11-06 | mm/vmalloc: add __alloc_size attributes for better bounds checking | Kees Cook | 1 | -11/+11
As already done in GrapheneOS, add the __alloc_size attribute for appropriate vmalloc allocator interfaces, to provide additional hinting for better bounds checking, assisting CONFIG_FORTIFY_SOURCE and other compiler optimizations. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Kees Cook <[email protected]> Co-developed-by: Daniel Micay <[email protected]> Signed-off-by: Daniel Micay <[email protected]> Cc: Andy Whitcroft <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: David Rientjes <[email protected]> Cc: Dennis Zhou <[email protected]> Cc: Dwaipayan Ray <[email protected]> Cc: Joe Perches <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Lukas Bulwahn <[email protected]> Cc: Miguel Ojeda <[email protected]> Cc: Nathan Chancellor <[email protected]> Cc: Nick Desaulniers <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Alexandre Bounine <[email protected]> Cc: Gustavo A. R. Silva <[email protected]> Cc: Ira Weiny <[email protected]> Cc: Jing Xiangfeng <[email protected]> Cc: John Hubbard <[email protected]> Cc: kernel test robot <[email protected]> Cc: Matt Porter <[email protected]> Cc: Randy Dunlap <[email protected]> Cc: Souptick Joarder <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-11-06 | mm/kvmalloc: add __alloc_size attributes for better bounds checking | Kees Cook | 1 | -8/+8
As already done in GrapheneOS, add the __alloc_size attribute for regular kvmalloc interfaces, to provide additional hinting for better bounds checking, assisting CONFIG_FORTIFY_SOURCE and other compiler optimizations. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Kees Cook <[email protected]> Co-developed-by: Daniel Micay <[email protected]> Signed-off-by: Daniel Micay <[email protected]> Reviewed-by: Nick Desaulniers <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Andy Whitcroft <[email protected]> Cc: Dennis Zhou <[email protected]> Cc: Dwaipayan Ray <[email protected]> Cc: Joe Perches <[email protected]> Cc: Lukas Bulwahn <[email protected]> Cc: Miguel Ojeda <[email protected]> Cc: Nathan Chancellor <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Alexandre Bounine <[email protected]> Cc: Gustavo A. R. Silva <[email protected]> Cc: Ira Weiny <[email protected]> Cc: Jing Xiangfeng <[email protected]> Cc: John Hubbard <[email protected]> Cc: kernel test robot <[email protected]> Cc: Matt Porter <[email protected]> Cc: Randy Dunlap <[email protected]> Cc: Souptick Joarder <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-11-06 | slab: add __alloc_size attributes for better bounds checking | Kees Cook | 1 | -28/+33
As already done in GrapheneOS, add the __alloc_size attribute for regular kmalloc interfaces, to provide additional hinting for better bounds checking, assisting CONFIG_FORTIFY_SOURCE and other compiler optimizations. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Kees Cook <[email protected]> Co-developed-by: Daniel Micay <[email protected]> Signed-off-by: Daniel Micay <[email protected]> Reviewed-by: Nick Desaulniers <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Andy Whitcroft <[email protected]> Cc: Dennis Zhou <[email protected]> Cc: Dwaipayan Ray <[email protected]> Cc: Joe Perches <[email protected]> Cc: Lukas Bulwahn <[email protected]> Cc: Miguel Ojeda <[email protected]> Cc: Nathan Chancellor <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Alexandre Bounine <[email protected]> Cc: Gustavo A. R. Silva <[email protected]> Cc: Ira Weiny <[email protected]> Cc: Jing Xiangfeng <[email protected]> Cc: John Hubbard <[email protected]> Cc: kernel test robot <[email protected]> Cc: Matt Porter <[email protected]> Cc: Randy Dunlap <[email protected]> Cc: Souptick Joarder <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-11-06 | slab: clean up function prototypes | Kees Cook | 1 | -34/+34
Based on feedback from Joe Perches and Linus Torvalds, regularize the slab function prototypes before making attribute changes. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Kees Cook <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Alexandre Bounine <[email protected]> Cc: Andy Whitcroft <[email protected]> Cc: Daniel Micay <[email protected]> Cc: Dennis Zhou <[email protected]> Cc: Dwaipayan Ray <[email protected]> Cc: Gustavo A. R. Silva <[email protected]> Cc: Ira Weiny <[email protected]> Cc: Jing Xiangfeng <[email protected]> Cc: Joe Perches <[email protected]> Cc: John Hubbard <[email protected]> Cc: kernel test robot <[email protected]> Cc: Lukas Bulwahn <[email protected]> Cc: Matt Porter <[email protected]> Cc: Miguel Ojeda <[email protected]> Cc: Nathan Chancellor <[email protected]> Cc: Nick Desaulniers <[email protected]> Cc: Randy Dunlap <[email protected]> Cc: Souptick Joarder <[email protected]> Cc: Tejun Heo <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-11-06 | Compiler Attributes: add __alloc_size() for better bounds checking | Kees Cook | 3 | -0/+30
GCC and Clang can use the "alloc_size" attribute to better inform the results of __builtin_object_size() (for compile-time constant values). Clang can additionally use alloc_size to inform the results of __builtin_dynamic_object_size() (for run-time values). Because GCC sees the frequent use of struct_size() as an allocator size argument, and notices it can return SIZE_MAX (the overflow indication), it complains about these call sites overflowing (since SIZE_MAX is greater than the default -Walloc-size-larger-than=PTRDIFF_MAX). This isn't helpful since we already know a SIZE_MAX will be caught at run-time (this was an intentional design). To deal with this, we must disable this check as it is both a false positive and redundant. (Clang does not have this warning option.) Unfortunately, just checking the -Wno-alloc-size-larger-than is not sufficient to make the __alloc_size attribute behave correctly under older GCC versions. The attribute itself must be disabled in those situations too, as there appears to be no way to reliably silence the SIZE_MAX constant expression cases for GCC versions less than 9.1: In file included from ./include/linux/resource_ext.h:11, from ./include/linux/pci.h:40, from drivers/net/ethernet/intel/ixgbe/ixgbe.h:9, from drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c:4: In function 'kmalloc_node', inlined from 'ixgbe_alloc_q_vector' at ./include/linux/slab.h:743:9: ./include/linux/slab.h:618:9: error: argument 1 value '18446744073709551615' exceeds maximum object size 9223372036854775807 [-Werror=alloc-size-larger-than=] return __kmalloc_node(size, flags, node); ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ./include/linux/slab.h: In function 'ixgbe_alloc_q_vector': ./include/linux/slab.h:455:7: note: in a call to allocation function '__kmalloc_node' declared here void *__kmalloc_node(size_t size, gfp_t flags, int node) __assume_slab_alignment __malloc; ^~~~~~~~~~~~~~ Specifically: '-Wno-alloc-size-larger-than' is not correctly handled by GCC < 9.1 https://godbolt.org/z/hqsfG7q84 (doesn't disable) https://godbolt.org/z/P9jdrPTYh (doesn't admit to not knowing about option) https://godbolt.org/z/465TPMWKb (only warns when other warnings appear) '-Walloc-size-larger-than=18446744073709551615' is not handled by GCC < 8.2 https://godbolt.org/z/73hh1EPxz (ignores numeric value) Since anything marked with __alloc_size would also qualify for marking with __malloc, just include __malloc along with it to avoid redundant markings. (Suggested by Linus Torvalds.) Finally, make sure checkpatch.pl doesn't get confused about finding the __alloc_size attribute on functions. (Thanks to Joe Perches.) Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Kees Cook <[email protected]> Tested-by: Randy Dunlap <[email protected]> Cc: Andy Whitcroft <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Daniel Micay <[email protected]> Cc: David Rientjes <[email protected]> Cc: Dennis Zhou <[email protected]> Cc: Dwaipayan Ray <[email protected]> Cc: Joe Perches <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Lukas Bulwahn <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Alexandre Bounine <[email protected]> Cc: Gustavo A. R. 
Silva <[email protected]> Cc: Ira Weiny <[email protected]> Cc: Jing Xiangfeng <[email protected]> Cc: John Hubbard <[email protected]> Cc: kernel test robot <[email protected]> Cc: Matt Porter <[email protected]> Cc: Miguel Ojeda <[email protected]> Cc: Nathan Chancellor <[email protected]> Cc: Nick Desaulniers <[email protected]> Cc: Souptick Joarder <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
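Outside the kernel tree, the effect of the attribute is easy to demonstrate; the sketch below uses the plain GCC/Clang attribute directly (the kernel's __alloc_size() macro is essentially a wrapper around it), and the reported size assumes optimizations are enabled so the compiler can track the allocation:

    #include <stdio.h>
    #include <stdlib.h>

    /* Simplified stand-in for the kernel's __alloc_size() macro. */
    #define __alloc_size(x) __attribute__((__alloc_size__(x)))

    /* Argument 1 is the size of the returned object. */
    static void *my_alloc(size_t size) __alloc_size(1);

    static void *my_alloc(size_t size)
    {
            return malloc(size);
    }

    int main(void)
    {
            char *buf = my_alloc(32);

            /*
             * With the attribute and a constant size, __builtin_object_size()
             * can report 32; it falls back to (size_t)-1 when the compiler
             * cannot see the allocation, e.g. at -O0.
             */
            printf("%zu\n", __builtin_object_size(buf, 0));
            free(buf);
            return 0;
    }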
2021-11-06 | kasan: generic: introduce kasan_record_aux_stack_noalloc() | Marco Elver | 1 | -0/+2
Introduce a variant of kasan_record_aux_stack() that does not do any memory allocation through stackdepot. This will permit using it in contexts that cannot allocate any memory. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Marco Elver <[email protected]> Tested-by: Shuah Khan <[email protected]> Acked-by: Sebastian Andrzej Siewior <[email protected]> Reviewed-by: Andrey Konovalov <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: "Gustavo A. R. Silva" <[email protected]> Cc: Lai Jiangshan <[email protected]> Cc: Taras Madan <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vijayanand Jitta <[email protected]> Cc: Vinayak Menon <[email protected]> Cc: Walter Wu <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-11-06 | lib/stackdepot: introduce __stack_depot_save() | Marco Elver | 1 | -0/+4
Add __stack_depot_save(), which provides more fine-grained control over stackdepot's memory allocation behaviour, in case stackdepot runs out of "stack slabs". Normally stackdepot uses alloc_pages() in case it runs out of space; passing can_alloc==false to __stack_depot_save() prohibits this, at the cost of more likely failure to record a stack trace. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Marco Elver <[email protected]> Tested-by: Shuah Khan <[email protected]> Acked-by: Sebastian Andrzej Siewior <[email protected]> Reviewed-by: Andrey Konovalov <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: "Gustavo A. R. Silva" <[email protected]> Cc: Lai Jiangshan <[email protected]> Cc: Taras Madan <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vijayanand Jitta <[email protected]> Cc: Vinayak Menon <[email protected]> Cc: Walter Wu <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
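A hedged usage sketch based on the description above, with the signature as introduced by this series (the entry count and GFP flags below are chosen only for illustration):

    #include <linux/stackdepot.h>
    #include <linux/stacktrace.h>

    /*
     * Sketch: record the current stack without ever allocating new stack
     * slabs. May return 0 if stackdepot is out of space and this trace has
     * not been recorded before.
     */
    static depot_stack_handle_t record_stack_noalloc(void)
    {
            unsigned long entries[16];
            unsigned int nr_entries;

            nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);

            /* can_alloc == false: never fall back to alloc_pages(). */
            return __stack_depot_save(entries, nr_entries, GFP_NOWAIT, false);
    }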
2021-11-06 | lib/stackdepot: include gfp.h | Marco Elver | 1 | -0/+2
Patch series "stackdepot, kasan, workqueue: Avoid expanding stackdepot slabs when holding raw_spin_lock", v2. Shuah Khan reported [1]: | When CONFIG_PROVE_RAW_LOCK_NESTING=y and CONFIG_KASAN are enabled, | kasan_record_aux_stack() runs into "BUG: Invalid wait context" when | it tries to allocate memory attempting to acquire spinlock in page | allocation code while holding workqueue pool raw_spinlock. | | There are several instances of this problem when block layer tries | to __queue_work(). Call trace from one of these instances is below: | | kblockd_mod_delayed_work_on() | mod_delayed_work_on() | __queue_delayed_work() | __queue_work() (rcu_read_lock, raw_spin_lock pool->lock held) | insert_work() | kasan_record_aux_stack() | kasan_save_stack() | stack_depot_save() | alloc_pages() | __alloc_pages() | get_page_from_freelist() | rm_queue() | rm_queue_pcplist() | local_lock_irqsave(&pagesets.lock, flags); | [ BUG: Invalid wait context triggered ] PROVE_RAW_LOCK_NESTING is pointing out that (on RT kernels) the locking rules are being violated. More generally, memory is being allocated from a non-preemptive context (raw_spin_lock'd c-s) where it is not allowed. To properly fix this, we must prevent stackdepot from replenishing its "stack slab" pool if memory allocations cannot be done in the current context: it's a bug to use either GFP_ATOMIC nor GFP_NOWAIT in certain non-preemptive contexts, including raw_spin_locks (see gfp.h and commit ab00db216c9c7). The only downside is that saving a stack trace may fail if: stackdepot runs out of space AND the same stack trace has not been recorded before. I expect this to be unlikely, and a simple experiment (boot the kernel) didn't result in any failure to record stack trace from insert_work(). The series includes a few minor fixes to stackdepot that I noticed in preparing the series. It then introduces __stack_depot_save(), which exposes the option to force stackdepot to not allocate any memory. Finally, KASAN is changed to use the new stackdepot interface and provide kasan_record_aux_stack_noalloc(), which is then used by workqueue code. [1] https://lkml.kernel.org/r/[email protected] This patch (of 6): <linux/stackdepot.h> refers to gfp_t, but doesn't include gfp.h. Fix it by including <linux/gfp.h>. Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Marco Elver <[email protected]> Tested-by: Shuah Khan <[email protected]> Acked-by: Sebastian Andrzej Siewior <[email protected]> Reviewed-by: Andrey Konovalov <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Lai Jiangshan <[email protected]> Cc: Walter Wu <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Vijayanand Jitta <[email protected]> Cc: Vinayak Menon <[email protected]> Cc: "Gustavo A. R. Silva" <[email protected]> Cc: Taras Madan <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-11-06 | mm: don't include <linux/dax.h> in <linux/mempolicy.h> | Christoph Hellwig | 1 | -1/+0
Not required at all, and having this causes a huge kernel rebuild as soon as something in dax.h changes. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Christoph Hellwig <[email protected]> Reviewed-by: Naoya Horiguchi <[email protected]> Reviewed-by: Dan Williams <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-11-06 | mm, slub: change percpu partial accounting from objects to pages | Vlastimil Babka | 2 | -13/+2
With CONFIG_SLUB_CPU_PARTIAL enabled, SLUB keeps a percpu list of partial slabs that can be promoted to cpu slab when the previous one is depleted, without accessing the shared partial list. A slab can be added to this list by 1) refill of an empty list from get_partial_node() - once we really have to access the shared partial list, we acquire multiple slabs to amortize the cost of locking, and 2) first free to a previously full slab - instead of putting the slab on a shared partial list, we can more cheaply freeze it and put it on the per-cpu list. To control how large a percpu partial list can grow for a kmem cache, set_cpu_partial() calculates a target number of free objects on each cpu's percpu partial list, and this can be also set by the sysfs file cpu_partial. However, the tracking of actual number of objects is imprecise, in order to limit overhead from cpu X freeing an objects to a slab on percpu partial list of cpu Y. Basically, the percpu partial slabs form a single linked list, and when we add a new slab to the list with current head "oldpage", we set in the struct page of the slab we're adding: page->pages = oldpage->pages + 1; // this is precise page->pobjects = oldpage->pobjects + (page->objects - page->inuse); page->next = oldpage; Thus the real number of free objects in the slab (objects - inuse) is only determined at the moment of adding the slab to the percpu partial list, and further freeing doesn't update the pobjects counter nor propagate it to the current list head. As Jann reports [1], this can easily lead to large inaccuracies, where the target number of objects (up to 30 by default) can translate to the same number of (empty) slab pages on the list. In case 2) above, we put a slab with 1 free object on the list, thus only increase page->pobjects by 1, even if there are subsequent frees on the same slab. Jann has noticed this in practice and so did we [2] when investigating significant increase of kmemcg usage after switching from SLAB to SLUB. While this is no longer a problem in kmemcg context thanks to the accounting rewrite in 5.9, the memory waste is still not ideal and it's questionable whether it makes sense to perform free object count based control when object counts can easily become so much inaccurate. So this patch converts the accounting to be based on number of pages only (which is precise) and removes the page->pobjects field completely. This is also ultimately simpler. To retain the existing set_cpu_partial() heuristic, first calculate the target number of objects as previously, but then convert it to target number of pages by assuming the pages will be half-filled on average. This assumption might obviously also be inaccurate in practice, but cannot degrade to actual number of pages being equal to the target number of objects. We could also skip the intermediate step with target number of objects and rewrite the heuristic in terms of pages. However we still have the sysfs file cpu_partial which uses number of objects and could break existing users if it suddenly becomes number of pages, so this patch doesn't do that. In practice, after this patch the heuristics limit the size of percpu partial list up to 2 pages. In case of a reported regression (which would mean some workload has benefited from the previous imprecise object based counting), we can tune the heuristics to get a better compromise within the new scheme, while still avoid the unexpectedly long percpu partial lists. 
[1] https://lore.kernel.org/linux-mm/CAG48ez2Qx5K1Cab-m8BdSibp6wLTip6ro4=-umR7BLsEgjEYzA@mail.gmail.com/ [2] https://lore.kernel.org/all/[email protected]/ ========== Evaluation ========== Mel was kind enough to run v1 through mmtests machinery for netperf (localhost) and hackbench and, for most significant results see below. So there are some apparent regressions, especially with hackbench, which I think ultimately boils down to having shorter percpu partial lists on average and some benchmarks benefiting from longer ones. Monitoring slab usage also indicated less memory usage by slab. Based on that, the following patch will bump the defaults to allow longer percpu partial lists than after this patch. However the goal is certainly not such that we would limit the percpu partial lists to 30 pages just because previously a specific alloc/free pattern could lead to the limit of 30 objects translate to a limit to 30 pages - that would make little sense. This is a correctness patch, and if a workload benefits from larger lists, the sysfs tuning knobs are still there to allow that. Netperf 2-socket Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz (20 cores, 40 threads per socket), 384GB RAM TCP-RR: hmean before 127045.79 after 121092.94 (-4.69%, worse) stddev before 2634.37 after 1254.08 UDP-RR: hmean before 166985.45 after 160668.94 ( -3.78%, worse) stddev before 4059.69 after 1943.63 2-socket Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz (20 cores, 40 threads per socket), 512GB RAM TCP-RR: hmean before 84173.25 after 76914.72 ( -8.62%, worse) UDP-RR: hmean before 93571.12 after 96428.69 ( 3.05%, better) stddev before 23118.54 after 16828.14 2-socket Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz (12 cores, 24 threads per socket), 64GB RAM TCP-RR: hmean before 49984.92 after 48922.27 ( -2.13%, worse) stddev before 6248.15 after 4740.51 UDP-RR: hmean before 61854.31 after 68761.81 ( 11.17%, better) stddev before 4093.54 after 5898.91 other machines - within 2% Hackbench (results before and after the patch, negative % means worse) 2-socket AMD EPYC 7713 (64 cores, 128 threads per core), 256GB RAM hackbench-process-sockets Amean 1 0.5380 0.5583 ( -3.78%) Amean 4 0.7510 0.8150 ( -8.52%) Amean 7 0.7930 0.9533 ( -20.22%) Amean 12 0.7853 1.1313 ( -44.06%) Amean 21 1.1520 1.4993 ( -30.15%) Amean 30 1.6223 1.9237 ( -18.57%) Amean 48 2.6767 2.9903 ( -11.72%) Amean 79 4.0257 5.1150 ( -27.06%) Amean 110 5.5193 7.4720 ( -35.38%) Amean 141 7.2207 9.9840 ( -38.27%) Amean 172 8.4770 12.1963 ( -43.88%) Amean 203 9.6473 14.3137 ( -48.37%) Amean 234 11.3960 18.7917 ( -64.90%) Amean 265 13.9627 22.4607 ( -60.86%) Amean 296 14.9163 26.0483 ( -74.63%) hackbench-thread-sockets Amean 1 0.5597 0.5877 ( -5.00%) Amean 4 0.7913 0.8960 ( -13.23%) Amean 7 0.8190 1.0017 ( -22.30%) Amean 12 0.9560 1.1727 ( -22.66%) Amean 21 1.7587 1.5660 ( 10.96%) Amean 30 2.4477 1.9807 ( 19.08%) Amean 48 3.4573 3.0630 ( 11.41%) Amean 79 4.7903 5.1733 ( -8.00%) Amean 110 6.1370 7.4220 ( -20.94%) Amean 141 7.5777 9.2617 ( -22.22%) Amean 172 9.2280 11.0907 ( -20.18%) Amean 203 10.2793 13.3470 ( -29.84%) Amean 234 11.2410 17.1070 ( -52.18%) Amean 265 12.5970 23.3323 ( -85.22%) Amean 296 17.1540 24.2857 ( -41.57%) 2-socket Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz (20 cores, 40 threads per socket), 384GB RAM hackbench-process-sockets Amean 1 0.5760 0.4793 ( 16.78%) Amean 4 0.9430 0.9707 ( -2.93%) Amean 7 1.5517 1.8843 ( -21.44%) Amean 12 2.4903 2.7267 ( -9.49%) Amean 21 3.9560 4.2877 ( -8.38%) Amean 30 5.4613 5.8343 ( -6.83%) Amean 48 8.5337 9.2937 ( -8.91%) 
Amean 79 14.0670 15.2630 ( -8.50%) Amean 110 19.2253 21.2467 ( -10.51%) Amean 141 23.7557 25.8550 ( -8.84%) Amean 172 28.4407 29.7603 ( -4.64%) Amean 203 33.3407 33.9927 ( -1.96%) Amean 234 38.3633 39.1150 ( -1.96%) Amean 265 43.4420 43.8470 ( -0.93%) Amean 296 48.3680 48.9300 ( -1.16%) hackbench-thread-sockets Amean 1 0.6080 0.6493 ( -6.80%) Amean 4 1.0000 1.0513 ( -5.13%) Amean 7 1.6607 2.0260 ( -22.00%) Amean 12 2.7637 2.9273 ( -5.92%) Amean 21 5.0613 4.5153 ( 10.79%) Amean 30 6.3340 6.1140 ( 3.47%) Amean 48 9.0567 9.5577 ( -5.53%) Amean 79 14.5657 15.7983 ( -8.46%) Amean 110 19.6213 21.6333 ( -10.25%) Amean 141 24.1563 26.2697 ( -8.75%) Amean 172 28.9687 30.2187 ( -4.32%) Amean 203 33.9763 34.6970 ( -2.12%) Amean 234 38.8647 39.3207 ( -1.17%) Amean 265 44.0813 44.1507 ( -0.16%) Amean 296 49.2040 49.4330 ( -0.47%) 2-socket Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz (20 cores, 40 threads per socket), 512GB RAM hackbench-process-sockets Amean 1 0.5027 0.5017 ( 0.20%) Amean 4 1.1053 1.2033 ( -8.87%) Amean 7 1.8760 2.1820 ( -16.31%) Amean 12 2.9053 3.1810 ( -9.49%) Amean 21 4.6777 4.9920 ( -6.72%) Amean 30 6.5180 6.7827 ( -4.06%) Amean 48 10.0710 10.5227 ( -4.48%) Amean 79 16.4250 17.5053 ( -6.58%) Amean 110 22.6203 24.4617 ( -8.14%) Amean 141 28.0967 31.0363 ( -10.46%) Amean 172 34.4030 36.9233 ( -7.33%) Amean 203 40.5933 43.0850 ( -6.14%) Amean 234 46.6477 48.7220 ( -4.45%) Amean 265 53.0530 53.9597 ( -1.71%) Amean 296 59.2760 59.9213 ( -1.09%) hackbench-thread-sockets Amean 1 0.5363 0.5330 ( 0.62%) Amean 4 1.1647 1.2157 ( -4.38%) Amean 7 1.9237 2.2833 ( -18.70%) Amean 12 2.9943 3.3110 ( -10.58%) Amean 21 4.9987 5.1880 ( -3.79%) Amean 30 6.7583 7.0043 ( -3.64%) Amean 48 10.4547 10.8353 ( -3.64%) Amean 79 16.6707 17.6790 ( -6.05%) Amean 110 22.8207 24.4403 ( -7.10%) Amean 141 28.7090 31.0533 ( -8.17%) Amean 172 34.9387 36.8260 ( -5.40%) Amean 203 41.1567 43.0450 ( -4.59%) Amean 234 47.3790 48.5307 ( -2.43%) Amean 265 53.9543 54.6987 ( -1.38%) Amean 296 60.0820 60.2163 ( -0.22%) 1-socket Intel(R) Xeon(R) CPU E3-1240 v5 @ 3.50GHz (4 cores, 8 threads), 32 GB RAM hackbench-process-sockets Amean 1 1.4760 1.5773 ( -6.87%) Amean 3 3.9370 4.0910 ( -3.91%) Amean 5 6.6797 6.9357 ( -3.83%) Amean 7 9.3367 9.7150 ( -4.05%) Amean 12 15.7627 16.1400 ( -2.39%) Amean 18 23.5360 23.6890 ( -0.65%) Amean 24 31.0663 31.3137 ( -0.80%) Amean 30 38.7283 39.0037 ( -0.71%) Amean 32 41.3417 41.6097 ( -0.65%) hackbench-thread-sockets Amean 1 1.5250 1.6043 ( -5.20%) Amean 3 4.0897 4.2603 ( -4.17%) Amean 5 6.7760 7.0933 ( -4.68%) Amean 7 9.4817 9.9157 ( -4.58%) Amean 12 15.9610 16.3937 ( -2.71%) Amean 18 23.9543 24.3417 ( -1.62%) Amean 24 31.4400 31.7217 ( -0.90%) Amean 30 39.2457 39.5467 ( -0.77%) Amean 32 41.8267 42.1230 ( -0.71%) 2-socket Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz (12 cores, 24 threads per socket), 64GB RAM hackbench-process-sockets Amean 1 1.0347 1.0880 ( -5.15%) Amean 4 1.7267 1.8527 ( -7.30%) Amean 7 2.6707 2.8110 ( -5.25%) Amean 12 4.1617 4.3383 ( -4.25%) Amean 21 7.0070 7.2600 ( -3.61%) Amean 30 9.9187 10.2397 ( -3.24%) Amean 48 15.6710 16.3923 ( -4.60%) Amean 79 24.7743 26.1247 ( -5.45%) Amean 110 34.3000 35.9307 ( -4.75%) Amean 141 44.2043 44.8010 ( -1.35%) Amean 172 54.2430 54.7260 ( -0.89%) Amean 192 60.6557 60.9777 ( -0.53%) hackbench-thread-sockets Amean 1 1.0610 1.1353 ( -7.01%) Amean 4 1.7543 1.9140 ( -9.10%) Amean 7 2.7840 2.9573 ( -6.23%) Amean 12 4.3813 4.4937 ( -2.56%) Amean 21 7.3460 7.5350 ( -2.57%) Amean 30 10.2313 10.5190 ( -2.81%) Amean 48 15.9700 16.5940 ( -3.91%) Amean 79 
25.3973 26.6637 ( -4.99%) Amean 110 35.1087 36.4797 ( -3.91%) Amean 141 45.8220 46.3053 ( -1.05%) Amean 172 55.4917 55.7320 ( -0.43%) Amean 192 62.7490 62.5410 ( 0.33%) Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Vlastimil Babka <[email protected]> Reported-by: Jann Horn <[email protected]> Cc: Roman Gushchin <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
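The half-filled-pages assumption described in this commit amounts to a one-line conversion; a conceptual sketch (not the exact set_cpu_partial() code) looks like this:

    /*
     * Conceptual sketch: convert the existing object-based cpu_partial
     * target into a page limit by assuming each percpu partial page is on
     * average half full, i.e. contributes objects_per_page / 2 free objects.
     */
    static unsigned int partial_pages_from_objects(unsigned int nr_objects,
                                                   unsigned int objects_per_page)
    {
            /* Round up so a non-zero object target always allows one page. */
            return (nr_objects * 2 + objects_per_page - 1) / objects_per_page;
    }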
2021-11-06 | mm: move kvmalloc-related functions to slab.h | Matthew Wilcox (Oracle) | 2 | -34/+34
Not all files in the kernel should include mm.h. Migrating callers from kmalloc to kvmalloc is easier if the kvmalloc functions are in slab.h. [[email protected]: move the new kvrealloc() also] [[email protected]: drivers/hwmon/occ/p9_sbe.c needs slab.h] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Matthew Wilcox (Oracle) <[email protected]> Acked-by: Pekka Enberg <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Vlastimil Babka <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-11-06 | bpf: Stop caching subprog index in the bpf_pseudo_func insn | Martin KaFai Lau | 1 | -0/+6
This patch is to fix an out-of-bound access issue when jit-ing the bpf_pseudo_func insn (i.e. ld_imm64 with src_reg == BPF_PSEUDO_FUNC) In jit_subprog(), it currently reuses the subprog index cached in insn[1].imm. This subprog index is an index into a few array related to subprogs. For example, in jit_subprog(), it is an index to the newly allocated 'struct bpf_prog **func' array. The subprog index was cached in insn[1].imm after add_subprog(). However, this could become outdated (and too big in this case) if some subprogs are completely removed during dead code elimination (in adjust_subprog_starts_after_remove). The cached index in insn[1].imm is not updated accordingly and causing out-of-bound issue in the later jit_subprog(). Unlike bpf_pseudo_'func' insn, the current bpf_pseudo_'call' insn is handling the DCE properly by calling find_subprog(insn->imm) to figure out the index instead of caching the subprog index. The existing bpf_adj_branches() will adjust the insn->imm whenever insn is added or removed. Instead of having two ways handling subprog index, this patch is to make bpf_pseudo_func works more like bpf_pseudo_call. First change is to stop caching the subprog index result in insn[1].imm after add_subprog(). The verification process will use find_subprog(insn->imm) to figure out the subprog index. Second change is in bpf_adj_branches() and have it to adjust the insn->imm for the bpf_pseudo_func insn also whenever insn is added or removed. Third change is in jit_subprog(). Like the bpf_pseudo_call handling, bpf_pseudo_func temporarily stores the find_subprog() result in insn->off. It is fine because the prog's insn has been finalized at this point. insn->off will be reset back to 0 later to avoid confusing the userspace prog dump tool. Fixes: 69c087ba6225 ("bpf: Add bpf_for_each_map_elem() helper") Signed-off-by: Martin KaFai Lau <[email protected]> Signed-off-by: Alexei Starovoitov <[email protected]> Link: https://lore.kernel.org/bpf/[email protected]
2021-11-05 | NFS: Remove the nfs4_label argument from nfs_setsecurity | Anna Schumaker | 1 | -2/+1
Signed-off-by: Anna Schumaker <[email protected]> Signed-off-by: Trond Myklebust <[email protected]>
2021-11-05 | NFS: Remove the nfs4_label argument from nfs_fhget() | Anna Schumaker | 1 | -1/+1
Signed-off-by: Anna Schumaker <[email protected]> Signed-off-by: Trond Myklebust <[email protected]>
2021-11-05 | NFS: Remove the nfs4_label argument from nfs_add_or_obtain() | Anna Schumaker | 1 | -2/+1
Signed-off-by: Anna Schumaker <[email protected]> Signed-off-by: Trond Myklebust <[email protected]>
2021-11-05 | NFS: Remove the nfs4_label argument from nfs_instantiate() | Anna Schumaker | 1 | -1/+1
Pull the label from the fattr instead. Signed-off-by: Anna Schumaker <[email protected]> Signed-off-by: Trond Myklebust <[email protected]>
2021-11-05 | NFS: Remove the nfs4_label from the nfs_setattrres | Anna Schumaker | 1 | -1/+0
Signed-off-by: Anna Schumaker <[email protected]> Signed-off-by: Trond Myklebust <[email protected]>
2021-11-05 | NFS: Remove the nfs4_label from the nfs4_getattr_res | Anna Schumaker | 1 | -3/+1
Signed-off-by: Anna Schumaker <[email protected]> Signed-off-by: Trond Myklebust <[email protected]>
2021-11-05 | NFS: Remove the f_label from the nfs4_opendata and nfs_openres | Anna Schumaker | 1 | -1/+0
Signed-off-by: Anna Schumaker <[email protected]> Signed-off-by: Trond Myklebust <[email protected]>
2021-11-05 | NFS: Remove the nfs4_label from the nfs4_lookupp_res struct | Anna Schumaker | 1 | -2/+1
Signed-off-by: Anna Schumaker <[email protected]> Signed-off-by: Trond Myklebust <[email protected]>
2021-11-05 | NFS: Remove the label from the nfs4_lookup_res struct | Anna Schumaker | 1 | -3/+1
And use the fattr's label field instead. I also adjust function calls to remove labels along the way. Signed-off-by: Anna Schumaker <[email protected]> Signed-off-by: Trond Myklebust <[email protected]>
2021-11-05 | NFS: Remove the nfs4_label from the nfs4_link_res struct | Anna Schumaker | 1 | -1/+0
Again, use the fattr's label field instead. Signed-off-by: Anna Schumaker <[email protected]> Signed-off-by: Trond Myklebust <[email protected]>
2021-11-05 | NFS: Remove the nfs4_label from the nfs4_create_res struct | Anna Schumaker | 1 | -1/+0
Instead, use the label embedded in the attached fattr. Signed-off-by: Anna Schumaker <[email protected]> Signed-off-by: Trond Myklebust <[email protected]>
2021-11-05 | NFS: Remove the nfs4_label from the nfs_entry struct | Anna Schumaker | 1 | -1/+0
And instead allocate the fattr using nfs_alloc_fattr_with_label() Signed-off-by: Anna Schumaker <[email protected]> Signed-off-by: Trond Myklebust <[email protected]>
2021-11-05 | NFS: Create a new nfs_alloc_fattr_with_label() function | Anna Schumaker | 1 | -0/+13
For creating fattrs with the label field already allocated for us. I also update nfs_free_fattr() to free the label in the end. Signed-off-by: Anna Schumaker <[email protected]> Signed-off-by: Trond Myklebust <[email protected]>
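A rough sketch of what such a constructor can look like, assuming nfs_alloc_fattr() and nfs4_label_alloc() as the building blocks (the exact parameters and error handling in fs/nfs may differ):

    /*
     * Sketch only: allocate an nfs_fattr together with its security label so
     * callers no longer have to manage the nfs4_label separately;
     * nfs_free_fattr() is then expected to free fattr->label as well.
     */
    struct nfs_fattr *nfs_alloc_fattr_with_label(struct nfs_server *server)
    {
            struct nfs_fattr *fattr = nfs_alloc_fattr();

            if (fattr) {
                    fattr->label = nfs4_label_alloc(server, GFP_NOFS);
                    if (IS_ERR(fattr->label)) {
                            fattr->label = NULL;
                            nfs_free_fattr(fattr);
                            return NULL;
                    }
            }
            return fattr;
    }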
2021-11-05 | Merge branch 'pci/host/apple' | Bjorn Helgaas | 1 | -0/+4
- Make of_phandle_args_to_fwspec() generally available (Marc Zyngier) - Allow matching of interrupt-maps local to interrupt controller or PCI device (Marc Zyngier) - Add Apple SoC (e.g., M1) PCIe host controller driver, which enables access to USB type-A, Ethernet, Wi-Fi, Bluetooth devices; these require additional drivers of their own (Alyssa Rosenzweig) - Add apple INTx, per-port, and MSI interrupt support (Marc Zyngier) - Configure apple Requester-ID-to-Stream-ID mapper for IOMMU (DART) support (Marc Zyngier) * pci/host/apple: PCI: apple: Configure RID to SID mapper on device addition iommu/dart: Exclude MSI doorbell from PCIe device IOVA range PCI: apple: Implement MSI support PCI: apple: Add INTx and per-port interrupt support PCI: apple: Set up reference clocks when probing PCI: apple: Add initial hardware bring-up PCI: of: Allow matching of an interrupt-map local to a PCI device of/irq: Allow matching of an interrupt-map local to an interrupt controller irqdomain: Make of_phandle_args_to_fwspec() generally available
2021-11-05 | Merge branch 'pci/misc' | Bjorn Helgaas | 1 | -11/+0
- Tidy setup-irq.c comments (Pranay Sanghai) - Fix misspellings (Krzysztof Wilczyński) - Fix sprintf(), sscanf() format mismatches (Krzysztof Wilczyński) - Tidy cpqphp code formatting (Krzysztof Wilczyński) - Remove unused pci_pool wrappers, which have been replaced by dma_pool (Cai Huoqing) - Remove a redundant initialization in __pci_reset_function_locked() (Colin Ian King) - Use 'unsigned int' instead of 'unsigned' (Krzysztof Wilczyński) - Update PCI subsystem information in MAINTAINERS (Krzysztof Wilczyński) - Include generic <linux/> headers instead of <asm/> for cpqphp and vmd (Krzysztof Wilczyński) * pci/misc: PCI: vmd: Drop redundant includes of <asm/device.h>, <asm/msi.h> PCI: cpqphp: Use <linux/io.h> instead of <asm/io.h> MAINTAINERS: Update PCI subsystem information PCI: Prefer 'unsigned int' over bare 'unsigned' PCI: Remove redundant 'rc' initialization PCI: Remove unused pci_pool wrappers PCI: cpqphp: Format if-statement code block correctly PCI: Use unsigned to match sscanf("%x") in pci_dev_str_match_path() PCI: hv: Remove unnecessary use of %hx PCI: Correct misspelled and remove duplicated words PCI: Tidy comments
2021-11-05 | Merge branch 'pci/vpd' | Bjorn Helgaas | 1 | -0/+2
- Add pci_read_vpd_any(), pci_write_vpd_any() to access VPD at arbitrary offsets (Heiner Kallweit) - Use VPD API to replace custom code in cxgb3 driver (Heiner Kallweit) * pci/vpd: cxgb3: Remove seeprom_write and use VPD API cxgb3: Use VPD API in t3_seeprom_wp() cxgb3: Remove t3_seeprom_read and use VPD API PCI/VPD: Use pci_read_vpd_any() in pci_vpd_size() PCI/VPD: Add pci_read/write_vpd_any()
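A small usage sketch; the signature is assumed to mirror pci_read_vpd(), as the "arbitrary offsets" description suggests, and the offset below is purely illustrative:

    /*
     * Sketch: read a small blob of VPD from an arbitrary offset, without
     * being constrained to the region advertised by the VPD structure.
     */
    static int read_vpd_blob(struct pci_dev *pdev, void *buf, size_t len)
    {
            ssize_t ret = pci_read_vpd_any(pdev, 0x200 /* example offset */, len, buf);

            if (ret < 0)
                    return ret;
            return ret == len ? 0 : -EIO;
    }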
2021-11-05 | Merge branch 'pci/switchtec' | Bjorn Helgaas | 1 | -0/+1
- Return error to application when command execution fails because an out-of-band reset has cleared the device BARs, Memory Space Enable, etc (Kelvin Cao) - Fix MRPC error status handling issue (Kelvin Cao) - Mask out other bits when reading of management VEP instance ID (Kelvin Cao) - Return EOPNOTSUPP instead of ENOTSUPP from sysfs show functions (Kelvin Cao) - Add check of event support (Logan Gunthorpe) * pci/switchtec: PCI/switchtec: Add check of event support PCI/switchtec: Replace ENOTSUPP with EOPNOTSUPP PCI/switchtec: Update the way of getting management VEP instance ID PCI/switchtec: Fix a MRPC error status handling issue PCI/switchtec: Error out MRPC execution when MMIO reads fail
2021-11-05 | Merge branch 'pci/driver' | Bjorn Helgaas | 1 | -2/+4
- Drop the struct pci_dev.driver pointer, which is redundant with the struct device.driver pointer (Uwe Kleine-König) * pci/driver: PCI: Remove struct pci_dev->driver PCI: Use to_pci_driver() instead of pci_dev->driver x86/pci/probe_roms: Use to_pci_driver() instead of pci_dev->driver perf/x86/intel/uncore: Use to_pci_driver() instead of pci_dev->driver powerpc/eeh: Use to_pci_driver() instead of pci_dev->driver usb: xhci: Use to_pci_driver() instead of pci_dev->driver cxl: Use to_pci_driver() instead of pci_dev->driver cxl: Factor out common dev->driver expressions xen/pcifront: Use to_pci_driver() instead of pci_dev->driver xen/pcifront: Drop pcifront_common_process() tests of pcidev, pdrv nfp: use dev_driver_string() instead of pci_dev->driver->name mlxsw: pci: Use dev_driver_string() instead of pci_dev->driver->name net: marvell: prestera: use dev_driver_string() instead of pci_dev->driver->name net: hns3: use dev_driver_string() instead of pci_dev->driver->name crypto: hisilicon - use dev_driver_string() instead of pci_dev->driver->name powerpc/eeh: Use dev_driver_string() instead of struct pci_dev->driver->name ssb: Use dev_driver_string() instead of pci_dev->driver->name bcma: simplify reference to driver name crypto: qat - simplify adf_enable_aer() scsi: message: fusion: Remove unused mpt_pci driver .probe() 'id' parameter PCI/ERR: Factor out common dev->driver expressions PCI: Drop pci_device_probe() test of !pci_dev->driver PCI: Drop pci_device_remove() test of pci_dev->driver PCI: Return NULL for to_pci_driver(NULL)
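The direction of the series can be summarized with a small sketch of the helper it converges on (illustrative, including the NULL handling added by the "Return NULL for to_pci_driver(NULL)" patch):

    /*
     * Sketch: with struct pci_dev->driver gone, the pci_driver is recovered
     * from the generic struct device.driver pointer instead.
     */
    static inline struct pci_driver *to_pci_driver(struct device_driver *drv)
    {
            return drv ? container_of(drv, struct pci_driver, driver) : NULL;
    }

    /* Typical caller conversion from pdev->driver: */
    /*     struct pci_driver *pdrv = to_pci_driver(pdev->dev.driver);     */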
2021-11-05 | Merge branch 'pci/enumeration' | Bjorn Helgaas | 1 | -1/+1
- Rename pcibios_add_device() to pcibios_device_add() since it's called from pci_device_add() (Oliver O'Halloran) - Don't try to enable AtomicOps on VFs, since they can only be enabled on the PF (Selvin Xavier) * pci/enumeration: PCI: Do not enable AtomicOps on VFs PCI: Rename pcibios_add_device() to pcibios_device_add()
2021-11-05 | Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi | Linus Torvalds | 1 | -4/+4
Pull SCSI updates from James Bottomley: "This consists of the usual driver updates (ufs, smartpqi, lpfc, target, megaraid_sas, hisi_sas, qla2xxx) and minor updates and bug fixes. Notable core changes are the removal of scsi->tag which caused some churn in obsolete drivers and a sweep through all drivers to call scsi_done() directly instead of scsi->done() which removes a pointer indirection from the hot path and a move to register core sysfs files earlier, which means they're available to KOBJ_ADD processing, which necessitates switching all drivers to using attribute groups" * tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (279 commits) scsi: lpfc: Update lpfc version to 14.0.0.3 scsi: lpfc: Allow fabric node recovery if recovery is in progress before devloss scsi: lpfc: Fix link down processing to address NULL pointer dereference scsi: lpfc: Allow PLOGI retry if previous PLOGI was aborted scsi: lpfc: Fix use-after-free in lpfc_unreg_rpi() routine scsi: lpfc: Correct sysfs reporting of loop support after SFP status change scsi: lpfc: Wait for successful restart of SLI3 adapter during host sg_reset scsi: lpfc: Revert LOG_TRACE_EVENT back to LOG_INIT prior to driver_resource_setup() scsi: ufs: ufshcd-pltfrm: Fix memory leak due to probe defer scsi: ufs: mediatek: Avoid sched_clock() misuse scsi: mpt3sas: Make mpt3sas_dev_attrs static scsi: scsi_transport_sas: Add 22.5 Gbps link rate definitions scsi: target: core: Stop using bdevname() scsi: aha1542: Use memcpy_{from,to}_bvec() scsi: sr: Add error handling support for add_disk() scsi: sd: Add error handling support for add_disk() scsi: target: Perform ALUA group changes in one step scsi: target: Replace lun_tg_pt_gp_lock with rcu in I/O path scsi: target: Fix alua_tg_pt_gps_count tracking scsi: target: Fix ordered tag handling ...
2021-11-05 | Merge tag 'pinctrl-v5.16-1' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl | Linus Torvalds | 1 | -2/+17
Pull pin control updates from Linus Walleij: "The most interesting aspect is that we now have initial support for the Apple pin controller as used in the M1 laptops and the iPhones which is a step forward for using Linux efficiently on this Apple silicon. Core changes: - Add infrastructure for per-parent interrupt data to support the Apple pin controller. New drivers: - New combined pin control and GPIO driver for the Apple SoC. This is used in all modern Apple silicon such as the M1 laptops but also in at least recent iPhone variants. - New subdriver for the Qualcomm SM6350 - New subdriver for the Qualcomm QCM2290 - New subdriver for the Qualcomm PM6350 - New subdriver for the Uniphier NX1 - New subdriver for the Samsung ExynosAutoV9 - New subdriver for the Mediatek MT7986 - New subdriver for the nVidia Tegra194 Improvements: - Improve power management in the Mediatek driver. - Improvements to the Renesas internal consistency checker. - Convert the Rockchip pin control device tree bindings to YAML. - Finally convert the Qualcomm PMIC SSBI and SPMI MPP GPIO driver to use hierarchical interrupts. - Convert the Qualcomm PMIC MPP device tree bindings to YAML" * tag 'pinctrl-v5.16-1' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl: (55 commits) pinctrl: add pinctrl/GPIO driver for Apple SoCs dt-bindings: pinctrl: Add apple,npins property to apple,pinctrl dt-bindings: pinctrl: add #interrupt-cells to apple,pinctrl gpio: Allow per-parent interrupt data pinctrl: tegra: Fix warnings and error pinctrl: intel: Kconfig: Add configuration menu to Intel pin control pinctrl: tegra: Use correct offset for pin group pinctrl: core: fix possible memory leak in pinctrl_enable() pinctrl: bcm2835: Allow building driver as a module pinctrl: equilibrium: Fix function addition in multiple groups pinctrl: tegra: Add pinmux support for Tegra194 pinctrl: tegra: include lpdr pin properties pinctrl: mediatek: add support for MT7986 SoC dt-bindings: pinctrl: update bindings for MT7986 SoC pinctrl: microchip sgpio: use reset driver dt-bindings: pinctrl: pinctrl-microchip-sgpio: Add reset binding dt-bindings: pinctrl: qcom,pmic-mpp: switch to #interrupt-cells pinctrl: qcom: spmi-mpp: add support for hierarchical IRQ chip pinctrl: qcom: spmi-mpp: hardcode IRQ counts pinctrl: qcom: ssbi-mpp: add support for hierarchical IRQ chip ...
2021-11-05 | mfd: tps80031: Remove driver | Dmitry Osipenko | 1 | -637/+0
Driver was upstreamed in 2013 and never got a user, remove it. Signed-off-by: Dmitry Osipenko <[email protected]> Signed-off-by: Lee Jones <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-11-05 | mfd: max77686: Correct tab-based alignment of register addresses | Luca Ceresoli | 1 | -13/+13
Some lines have an extra tab, remove them for proper visual alignment as present on the rest of this file. Signed-off-by: Luca Ceresoli <[email protected]> Reviewed-by: Krzysztof Kozlowski <[email protected]> Signed-off-by: Lee Jones <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-11-05 | mfd: tps65912: Make tps65912_device_exit() return void | Uwe Kleine-König | 1 | -1/+1
Up to now tps65912_device_exit() returns zero unconditionally. Make it return void instead which makes it easier to see in the callers that there is no error to handle. Also the return value of i2c and spi remove callbacks is ignored anyway. Signed-off-by: Uwe Kleine-König <[email protected]> Signed-off-by: Lee Jones <[email protected]> Link: https://lore.kernel.org/r/[email protected]
2021-11-05 | mfd: da9063: Add support for latest EA silicon revision | Carlos de Paula | 1 | -0/+1
This update adds new regmap to support the latest EA silicon which will be selected based on the chip and variant information read from the device. Signed-off-by: Carlos de Paula <[email protected]> Reviewed-by: Adam Thomson <[email protected]> Signed-off-by: Lee Jones <[email protected]>
2021-11-05 | Merge branches 'ib-mfd-iio-touchscreen-clk-5.16', 'ib-mfd-misc-regulator-5.16' and 'tb-mfd-from-regulator-5.16' into ibs-for-mfd-merged | Lee Jones | 1 | -25/+0
2021-11-05 | pwm: Make it explicit that pwm_apply_state() might sleep | Uwe Kleine-König | 1 | -0/+4
At least some implementations sleep. So mark pwm_apply_state() with a might_sleep() to make callers aware. In the worst case this uncovers a valid atomic user, then we revert this patch and at least gained some more knowledge and then can work on a concept similar to gpio_get_value/gpio_get_value_cansleep. Signed-off-by: Uwe Kleine-König <[email protected]> Signed-off-by: Thierry Reding <[email protected]>
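Conceptually the change is tiny; a sketch of the annotation (not the exact diff against drivers/pwm/core.c):

    int pwm_apply_state(struct pwm_device *pwm, const struct pwm_state *state)
    {
            /*
             * Document (and, with CONFIG_DEBUG_ATOMIC_SLEEP, enforce) that
             * callers must be in a context that is allowed to sleep.
             */
            might_sleep();

            /* ... existing argument checks and chip->ops->apply() call ... */
            return 0;
    }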
2021-11-05 | pwm: Add might_sleep() annotations for !CONFIG_PWM API functions | Uwe Kleine-König | 1 | -0/+9
The normal implementations of these functions make use of mutexes. To make it obvious that these functions might sleep also add annotations to the dummy implementations in the !CONFIG_PWM case. Signed-off-by: Uwe Kleine-König <[email protected]> Signed-off-by: Thierry Reding <[email protected]>