path: root/lib/stackdepot.c
2022-10-03  stackdepot: reserve 5 extra bits in depot_stack_handle_t  (Alexander Potapenko; 1 file, -5/+24)
Some users (currently only KMSAN) may want to use spare bits in depot_stack_handle_t. Let them do so by adding @extra_bits to __stack_depot_save() to store arbitrary flags, and providing stack_depot_get_extra_bits() to retrieve those flags. Also adapt KASAN to the new prototype by passing extra_bits=0, as KASAN does not intend to store additional information in the stack handle. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Alexander Potapenko <[email protected]> Reviewed-by: Marco Elver <[email protected]> Cc: Alexander Viro <[email protected]> Cc: Alexei Starovoitov <[email protected]> Cc: Andrey Konovalov <[email protected]> Cc: Andrey Konovalov <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Borislav Petkov <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: David Rientjes <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Eric Biggers <[email protected]> Cc: Eric Biggers <[email protected]> Cc: Eric Dumazet <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Herbert Xu <[email protected]> Cc: Ilya Leoshkevich <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Jens Axboe <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Kees Cook <[email protected]> Cc: Mark Rutland <[email protected]> Cc: Matthew Wilcox <[email protected]> Cc: Michael S. Tsirkin <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Petr Mladek <[email protected]> Cc: Stephen Rothwell <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vasily Gorbik <[email protected]> Cc: Vegard Nossum <[email protected]> Cc: Vlastimil Babka <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
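For illustration, a sketch of how a caller might use the new parameter (prototypes as described above; the buffer size, the hypothetical my_origin_flags value and the exact argument order are assumptions):

    /* Save a trace and stash up to 5 bits of caller-defined state. */
    unsigned long entries[16];
    unsigned int nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
    depot_stack_handle_t handle = __stack_depot_save(entries, nr_entries,
                                                     my_origin_flags & 0x1f,
                                                     GFP_NOWAIT, true);
    /* Later, recover the extra bits from the handle alone. */
    unsigned int extra = stack_depot_get_extra_bits(handle);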
2022-07-17  lib/stackdepot: replace CONFIG_STACK_HASH_ORDER with automatic sizing  (Vlastimil Babka; 1 file, -10/+49)
As Linus explained [1], setting the stackdepot hash table size as a config option is suboptimal, especially as stackdepot becomes a dependency of less "expert" subsystems than initially (e.g. DRM, networking, SLUB_DEBUG):

: (a) it introduces a new compile-time question that isn't sane to ask a regular user, but is now exposed to regular users.
: (b) this by default uses 1MB of memory for a feature that didn't in the past, so now if you have small machines you need to make sure you make a special kernel config for them.

Ideally we would employ rhashtable for fully automatic resizing, which should be feasible for many of the new users, but is problematic for the original users with restricted context that call __stack_depot_save() with can_alloc == false, i.e. KASAN.

However, we can easily remove the config option and scale the hash table automatically with system memory, as sketched below. The STACK_HASH_MASK constant becomes the stack_hash_mask variable and is used in only one mask operation, so the overhead should be negligible to none. For early allocation we can employ the existing alloc_large_system_hash() function and perform similar scaling for the late allocation.

The existing limits of the config option (between 4k and 1M buckets) are preserved, and the scaling factor is set to one bucket per 16kB of memory, so on 64-bit the maximum of 1M buckets (8MB of memory) is reached with a 16GB system, while a 1GB system will use 512kB.

Because KASAN is reported to need the maximum number of buckets even with smaller amounts of memory [2], set it as such when kasan_enabled(). If needed, the automatic scaling could be complemented with a boot-time kernel parameter, but it feels pointless to add it without a specific use case.

[1] https://lore.kernel.org/all/CAHk-=wjC5nS+fnf6EzRD9yQRJApAhxx7gRB87ZV+pAWo9oVrTg@mail.gmail.com/
[2] https://lore.kernel.org/all/CACT4Y+Y4GZfXOru2z5tFPzFdaSUd+GFc6KVL=bsa0+1m197cQQ@mail.gmail.com/

Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Vlastimil Babka <[email protected]> Reported-by: Linus Torvalds <[email protected]> Acked-by: Dmitry Vyukov <[email protected]> Cc: Marco Elver <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Andrey Konovalov <[email protected]> Signed-off-by: Andrew Morton <[email protected]>
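A sketch of the scaling rule described above (illustrative only; the actual patch uses alloc_large_system_hash() for the early path, and the variable names here are assumptions):

    /* One bucket per 16kB of RAM, clamped to the old Kconfig range. */
    unsigned long nr_buckets = (totalram_pages() << PAGE_SHIFT) / SZ_16K;
    nr_buckets = clamp(nr_buckets, 1UL << 12, 1UL << 20); /* 4k..1M buckets */
    if (kasan_enabled())            /* KASAN wants the maximum [2] */
            nr_buckets = 1UL << 20;
    stack_hash_mask = roundup_pow_of_two(nr_buckets) - 1;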
2022-04-06  lib/stackdepot: allow requesting early initialization dynamically  (Vlastimil Babka; 1 file, -22/+45)
In a later patch we want to add stackdepot support for object owner tracking in slub caches, which is enabled by the slub_debug boot parameter. This creates a bootstrap problem, as some caches are created early in boot when slab_is_available() is false and thus stack_depot_init() tries to use memblock. But, as reported by Hyeonggon Yoo [1], we are already beyond memblock_free_all(). Ideally the memblock allocation should fail, yet it succeeds, but later the system crashes, which is a separately handled issue.

To resolve this bootstrap issue in a robust way, this patch adds another way to request stack_depot_early_init(), which happens at a well-defined point in time. In addition to the build-time CONFIG_STACKDEPOT_ALWAYS_INIT, code that's e.g. processing boot parameters (which happens early enough) can call a new function, stack_depot_want_early_init(), which sets a flag that stack_depot_early_init() will check; see the sketch after this entry.

In this patch we also convert page_owner to this approach. While it doesn't have the bootstrap issue that slub does, it is also functionality enabled by a boot param and can thus request stack_depot_early_init() with memblock allocation instead of later initialization with kvmalloc().

As suggested by Mike, make stack_depot_early_init() only attempt memblock allocation and stack_depot_init() only attempt kvmalloc(). Also change the latter to kvcalloc(). In both cases we can drop the explicit array zeroing, which the allocations already do.

As suggested by Marco, provide empty implementations of the init functions for !CONFIG_STACKDEPOT builds to simplify the callers.

[1] https://lore.kernel.org/all/YhnUcqyeMgCrWZbd@ip-172-31-19-208.ap-northeast-1.compute.internal/

Reported-by: Hyeonggon Yoo <[email protected]> Suggested-by: Mike Rapoport <[email protected]> Suggested-by: Marco Elver <[email protected]> Signed-off-by: Vlastimil Babka <[email protected]> Reviewed-by: Marco Elver <[email protected]> Reviewed-and-tested-by: Hyeonggon Yoo <[email protected]> Reviewed-by: Mike Rapoport <[email protected]> Acked-by: David Rientjes <[email protected]>
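A sketch of the resulting pattern, loosely following the page_owner conversion mentioned above (the handler body is assumed, not copied from the patch):

    static bool page_owner_enabled __initdata;

    static int __init early_page_owner_param(char *buf)
    {
            int ret = kstrtobool(buf, &page_owner_enabled);

            /* Runs early enough that stack_depot_early_init() will
             * see the flag and perform the memblock allocation. */
            if (page_owner_enabled)
                    stack_depot_want_early_init();
            return ret;
    }
    early_param("page_owner", early_page_owner_param);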
2022-01-22  lib/stackdepot: always do filter_irq_stacks() in stack_depot_save()  (Marco Elver; 1 file, -0/+13)
The non-interrupt portion of interrupt stack traces before interrupt entry is usually arbitrary. Therefore, saving stack traces of interrupts (that include entries before interrupt entry) to stack depot leads to unbounded stackdepot growth. As such, use of filter_irq_stacks() is a requirement to ensure stackdepot can efficiently deduplicate interrupt stacks. Looking through all current users of stack_depot_save(), none (except KASAN) pass the stack trace through filter_irq_stacks() before passing it on to stack_depot_save(). Rather than adding filter_irq_stacks() to all current users of stack_depot_save(), it became clear that stack_depot_save() should simply do filter_irq_stacks(). Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Marco Elver <[email protected]> Reviewed-by: Alexander Potapenko <[email protected]> Acked-by: Vlastimil Babka <[email protected]> Reviewed-by: Andrey Konovalov <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Vijayanand Jitta <[email protected]> Cc: "Gustavo A. R. Silva" <[email protected]> Cc: Imran Khan <[email protected]> Cc: Chris Wilson <[email protected]> Cc: Jani Nikula <[email protected]> Cc: Mika Kuoppala <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
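The shape of the change, sketched (in the actual patch the filtering lives inside stackdepot's internal save path; this condensed form is an assumption):

    depot_stack_handle_t stack_depot_save(unsigned long *entries,
                                          unsigned int nr_entries,
                                          gfp_t gfp_flags)
    {
            /* Cut everything below the IRQ entry point before hashing,
             * so interrupted task stacks can no longer inflate the depot. */
            nr_entries = filter_irq_stacks(entries, nr_entries);
            return __stack_depot_save(entries, nr_entries, gfp_flags, true);
    }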
2022-01-22  lib/stackdepot: allow optional init and stack_table allocation by kvmalloc()  (Vlastimil Babka; 1 file, -5/+28)
Currently, enabling CONFIG_STACKDEPOT means its stack_table will be allocated from memblock, even if stack depot ends up not actually used. The default size of stack_table is 4MB on 32-bit, 8MB on 64-bit. This is fine for use-cases such as KASAN, which is also a config option and has overhead on its own. But it's an issue for functionality that has to be actually enabled on boot (page_owner) or depends on hardware (GPU drivers), where the memory might be wasted. This was raised as an issue [1] when attempting to add stackdepot support for SLUB's debug object tracking functionality. It's common to build kernels with CONFIG_SLUB_DEBUG and enable slub_debug on boot only when needed, or create only specific kmem caches with debugging for testing purposes.

It would thus be more efficient if stackdepot's table was allocated only when actually going to be used. This patch thus makes the allocation (and the whole stack_depot_init() call) optional:

- Add a CONFIG_STACKDEPOT_ALWAYS_INIT flag to keep using the current well-defined point of allocation as part of mem_init(). Make CONFIG_KASAN select this flag.

- Other users have to call stack_depot_init() as part of their own init when it's determined that stack depot will actually be used. This may depend on both config and runtime conditions. Convert the current users, which are page_owner and several in the DRM subsystem. The same will be done for SLUB later.

- Because the init might now be called after the boot-time memblock allocation has given all memory to the buddy allocator, change stack_depot_init() to allocate stack_table with kvmalloc() when memblock is no longer available (see the sketch at the end of this entry). Also handle allocation failure by disabling stackdepot (this could theoretically have happened even with the memblock allocation previously), and don't unnecessarily align the memblock allocation to its own size anymore.

[1] https://lore.kernel.org/all/CAMuHMdW=eoVzM1Re5FVoEN87nKfiLmM2+Ah7eNu2KXEhCvbZyA@mail.gmail.com/

Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Vlastimil Babka <[email protected]> Acked-by: Dmitry Vyukov <[email protected]> Reviewed-by: Marco Elver <[email protected]> # stackdepot Cc: Marco Elver <[email protected]> Cc: Vijayanand Jitta <[email protected]> Cc: Maarten Lankhorst <[email protected]> Cc: Maxime Ripard <[email protected]> Cc: Thomas Zimmermann <[email protected]> Cc: David Airlie <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Andrey Konovalov <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Oliver Glitta <[email protected]> Cc: Imran Khan <[email protected]>

From: Colin Ian King <[email protected]>
Subject: lib/stackdepot: fix spelling mistake and grammar in pr_err message

There is a spelling mistake of the word allocation, so fix this and re-phrase the message to make it easier to read.

Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Colin Ian King <[email protected]> Cc: Vlastimil Babka <[email protected]>

From: Vlastimil Babka <[email protected]>
Subject: lib/stackdepot: allow optional init and stack_table allocation by kvmalloc() - fixup

On FLATMEM, we call page_ext_init_flatmem_late() just before kmem_cache_init(), which means stack_depot_init() (called by page owner init) will not properly recognize that it should use kvmalloc() and not memblock_alloc(). memblock_alloc() will also not issue a warning and will return a block of memory that can be invalid and cause a kernel page fault when saving stacks, as reported by the kernel test robot [1]. Fix this by moving page_ext_init_flatmem_late() below kmem_cache_init(), so that slab_is_available() is true during stack_depot_init(). SPARSEMEM doesn't have this issue, as it doesn't do page_ext_init_flatmem_late(), but a different page_ext_init() even later in the boot process. Thanks to Mike Rapoport for pointing out the FLATMEM init ordering issue. While at it, also actually resolve a checkpatch warning in stack_depot_init() from DRM CI, which was supposed to be in the original patch already.

[1] https://lore.kernel.org/all/20211014085450.GC18719@xsang-OptiPlex-9020/

Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Vlastimil Babka <[email protected]> Reported-by: kernel test robot <[email protected]> Cc: Mike Rapoport <[email protected]> Cc: Stephen Rothwell <[email protected]>

From: Vlastimil Babka <[email protected]>
Subject: lib/stackdepot: allow optional init and stack_table allocation by kvmalloc() - fixup3

Due to cd06ab2fd48f ("drm/locking: add backtrace for locking contended locks without backoff") landing recently in -next and adding a new stack depot user in drivers/gpu/drm/drm_modeset_lock.c, we need to add an appropriate call to stack_depot_init() there as well.

Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Vlastimil Babka <[email protected]> Cc: Jani Nikula <[email protected]> Cc: Naresh Kamboju <[email protected]> Cc: Marco Elver <[email protected]> Cc: Vijayanand Jitta <[email protected]> Cc: Maarten Lankhorst <[email protected]> Cc: Maxime Ripard <[email protected]> Cc: Thomas Zimmermann <[email protected]> Cc: David Airlie <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Andrey Konovalov <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Oliver Glitta <[email protected]> Cc: Imran Khan <[email protected]> Cc: Stephen Rothwell <[email protected]>

From: Vlastimil Babka <[email protected]>
Subject: lib/stackdepot: allow optional init and stack_table allocation by kvmalloc() - fixup4

Due to 4e66934eaadc ("lib: add reference counting tracking infrastructure") landing recently in net-next and adding a new stack depot user in lib/ref_tracker.c, we need to add an appropriate call to stack_depot_init() there as well.

Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Vlastimil Babka <[email protected]> Reviewed-by: Eric Dumazet <[email protected]> Cc: Jiri Slaby <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
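A sketch of the allocator choice the main patch introduces (locking elided and variable names assumed; the pr_err text follows Colin's fixup above):

    int stack_depot_init(void)
    {
            size_t size = sizeof(struct stack_record *) * STACK_HASH_SIZE;

            if (slab_is_available())  /* late init: buddy/slab are up */
                    stack_table = kvmalloc(size, GFP_KERNEL | __GFP_ZERO);
            else                      /* early init: memblock still owns memory */
                    stack_table = memblock_alloc(size, SMP_CACHE_BYTES);

            if (!stack_table) {
                    pr_err("Stack Depot hash table allocation failed, disabling\n");
                    stack_depot_disable = true;
                    return -ENOMEM;
            }
            return 0;
    }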
2021-11-09  lib, stackdepot: add helper to print stack entries into buffer  (Imran Khan; 1 file, -0/+25)
To print stack entries into a buffer, users of stackdepot first get a list of stack entries using stack_depot_fetch() and then print this list into a buffer using stack_trace_snprint(). Provide a helper in stackdepot for this purpose (sketched below). Also change the above-mentioned users to use this helper. [[email protected]: fix build error] Link: https://lkml.kernel.org/r/[email protected] [[email protected]: export stack_depot_snprint() to modules] Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Imran Khan <[email protected]> Suggested-by: Vlastimil Babka <[email protected]> Acked-by: Vlastimil Babka <[email protected]> Acked-by: Jani Nikula <[email protected]> [i915] Cc: Alexander Potapenko <[email protected]> Cc: Andrey Konovalov <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: David Airlie <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Maarten Lankhorst <[email protected]> Cc: Maxime Ripard <[email protected]> Cc: Thomas Zimmermann <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
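The before/after at a call site, sketched:

    /* Before: every user fetches, then formats. */
    nr_entries = stack_depot_fetch(handle, &entries);
    stack_trace_snprint(buf, size, entries, nr_entries, 0);

    /* After: one helper call. */
    stack_depot_snprint(handle, buf, size, 0);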
2021-11-09  lib, stackdepot: add helper to print stack entries  (Imran Khan; 1 file, -0/+18)
To print stack entries, users of stackdepot first use stack_depot_fetch() to get a list of stack entries and then use stack_trace_print() to print this list. Provide a helper in stackdepot to print stack entries based on a stackdepot handle (sketched below). Also change the above-mentioned users to use this helper. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Imran Khan <[email protected]> Suggested-by: Vlastimil Babka <[email protected]> Acked-by: Vlastimil Babka <[email protected]> Reviewed-by: Alexander Potapenko <[email protected]> Cc: Andrey Konovalov <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: David Airlie <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Maarten Lankhorst <[email protected]> Cc: Maxime Ripard <[email protected]> Cc: Thomas Zimmermann <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
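Likewise for plain printing, sketched:

    /* Before: */
    nr_entries = stack_depot_fetch(handle, &entries);
    stack_trace_print(entries, nr_entries, 0);

    /* After: */
    stack_depot_print(handle);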
2021-11-09  lib, stackdepot: check stackdepot handle before accessing slabs  (Imran Khan; 1 file, -0/+3)
Patch series "lib, stackdepot: check stackdepot handle before accessing slabs", v2. PATCH-1: Checks validity of a stackdepot handle before proceeding to access stackdepot slab/objects. PATCH-2: Adds a helper in stackdepot, to allow users to print stack entries just by specifying the stackdepot handle. It also changes such users to use this new interface. PATCH-3: Adds a helper in stackdepot, to allow users to print stack entries into buffers just by specifying the stackdepot handle and destination buffer. It also changes such users to use this new interface. This patch (of 3): stack_depot_save allocates slabs that will be used for storing objects in future.If this slab allocation fails we may get to a situation where space allocation for a new stack_record fails, causing stack_depot_save to return 0 as handle. If user of this handle ends up invoking stack_depot_fetch with this handle value, current implementation of stack_depot_fetch will end up using slab from wrong index. To avoid this check handle value at the beginning. Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Imran Khan <[email protected]> Suggested-by: Vlastimil Babka <[email protected]> Acked-by: Vlastimil Babka <[email protected]> Cc: Geert Uytterhoeven <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Andrey Konovalov <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Maarten Lankhorst <[email protected]> Cc: Maxime Ripard <[email protected]> Cc: Thomas Zimmermann <[email protected]> Cc: David Airlie <[email protected]> Cc: Daniel Vetter <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-11-06  stacktrace: move filter_irq_stacks() to kernel/stacktrace.c  (Marco Elver; 1 file, -24/+0)
filter_irq_stacks() has little to do with the stackdepot implementation, except that it is usually used by users (such as KASAN) of stackdepot to reduce the stack trace. However, filter_irq_stacks() itself is not useful without a stack trace as obtained by stack_trace_save() and friends. Therefore, move filter_irq_stacks() to kernel/stacktrace.c, so that new users of filter_irq_stacks() do not have to start depending on STACKDEPOT only for filter_irq_stacks(). Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Marco Elver <[email protected]> Acked-by: Dmitry Vyukov <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Jann Horn <[email protected]> Cc: Aleksandr Nogikh <[email protected]> Cc: Taras Madan <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-11-06  lib/stackdepot: introduce __stack_depot_save()  (Marco Elver; 1 file, -6/+37)
Add __stack_depot_save(), which provides more fine-grained control over stackdepot's memory allocation behaviour, in case stackdepot runs out of "stack slabs". Normally stackdepot uses alloc_pages() in case it runs out of space; passing can_alloc==false to __stack_depot_save() prohibits this, at the cost of more likely failure to record a stack trace. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Marco Elver <[email protected]> Tested-by: Shuah Khan <[email protected]> Acked-by: Sebastian Andrzej Siewior <[email protected]> Reviewed-by: Andrey Konovalov <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: "Gustavo A. R. Silva" <[email protected]> Cc: Lai Jiangshan <[email protected]> Cc: Taras Madan <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vijayanand Jitta <[email protected]> Cc: Vinayak Menon <[email protected]> Cc: Walter Wu <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
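A usage sketch with the prototype as introduced here (the extra_bits argument only appears in the later 2022-10-03 patch above):

    /* Restricted context: recording may fail, but never allocates pages. */
    handle = __stack_depot_save(entries, nr_entries, GFP_NOWAIT, false);
    if (!handle)
            pr_debug("stack depot full, trace not recorded\n");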
2021-11-06  lib/stackdepot: remove unused function argument  (Marco Elver; 1 file, -5/+4)
alloc_flags in depot_alloc_stack() is no longer used; remove it. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Marco Elver <[email protected]> Tested-by: Shuah Khan <[email protected]> Acked-by: Sebastian Andrzej Siewior <[email protected]> Reviewed-by: Andrey Konovalov <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: "Gustavo A. R. Silva" <[email protected]> Cc: Lai Jiangshan <[email protected]> Cc: Taras Madan <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Vijayanand Jitta <[email protected]> Cc: Vinayak Menon <[email protected]> Cc: Walter Wu <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-07-08  lib: fix spelling mistakes  (Zhen Lei; 1 file, -1/+1)
Fix some spelling mistakes in comments:

permanentely ==> permanently
wont ==> won't
remaning ==> remaining
succed ==> succeed
shouldnt ==> shouldn't
alpha-numeric ==> alphanumeric
storeing ==> storing
funtion ==> function
documenation ==> documentation
Determin ==> Determine
intepreted ==> interpreted
ammount ==> amount
obious ==> obvious
interupts ==> interrupts
occured ==> occurred
asssociated ==> associated
taking into acount ==> taking into account
squence ==> sequence
stil ==> still
contiguos ==> contiguous
matchs ==> matches

Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Zhen Lei <[email protected]> Reviewed-by: Jacob Keller <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-05-06  lib: stackdepot: turn depot_lock spinlock to raw_spinlock  (Zqiang; 1 file, -3/+3)
In an RT system, the spin_lock is replaced by the sleepable rt_mutex lock; in __call_rcu(), interrupts are disabled before calling kasan_record_aux_stack(), so taking the depot lock there triggers this calltrace:

 BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:951
 in_atomic(): 0, irqs_disabled(): 1, non_block: 0, pid: 19, name: pgdatinit0
 Call Trace:
  ___might_sleep.cold+0x1b2/0x1f1
  rt_spin_lock+0x3b/0xb0
  stack_depot_save+0x1b9/0x440
  kasan_save_stack+0x32/0x40
  kasan_record_aux_stack+0xa5/0xb0
  __call_rcu+0x117/0x880
  __exit_signal+0xafb/0x1180
  release_task+0x1d6/0x480
  exit_notify+0x303/0x750
  do_exit+0x678/0xcf0
  kthread+0x364/0x4f0
  ret_from_fork+0x22/0x30

Replace the spinlock with a raw_spinlock, which is not converted to a sleeping lock on RT (sketched below).

Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Zqiang <[email protected]> Reported-by: Andrew Halaney <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Gustavo A. R. Silva <[email protected]> Cc: Vijayanand Jitta <[email protected]> Cc: Vinayak Menon <[email protected]> Cc: Yogesh Lal <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
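The shape of the fix, sketched:

    static raw_spinlock_t depot_lock = __RAW_SPIN_LOCK_UNLOCKED(depot_lock);

    /* raw_spin_lock stays a true spinlock on RT, so it remains safe in
     * the interrupts-disabled path shown in the calltrace above. */
    raw_spin_lock_irqsave(&depot_lock, flags);
    /* ... find or insert the stack record ... */
    raw_spin_unlock_irqrestore(&depot_lock, flags);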
2021-02-26  lib: stackdepot: fix ignoring return value warning  (Vijayanand Jitta; 1 file, -2/+4)
Fix the warning below about ignoring the return value of kstrtobool() in the is_stack_depot_disabled() function:

lib/stackdepot.c: In function 'is_stack_depot_disabled':
lib/stackdepot.c:154:2: warning: ignoring return value of 'kstrtobool' declared with attribute 'warn_unused_result' [-Wunused-result]

Link: https://lkml.kernel.org/r/[email protected] Fixes: b9779abb09a8 ("lib: stackdepot: add support to disable stack depot") Signed-off-by: Vijayanand Jitta <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
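A sketch of the fixed handler (the body besides the return-value check is assumed):

    static int __init is_stack_depot_disabled(char *str)
    {
            int ret = kstrtobool(str, &stack_depot_disable);

            /* Consume the result instead of ignoring it. */
            if (!ret && stack_depot_disable)
                    pr_info("Stack Depot is disabled\n");
            return 0;
    }
    early_param("stack_depot_disable", is_stack_depot_disabled);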
2021-02-26  lib: stackdepot: add support to disable stack depot  (Vijayanand Jitta; 1 file, -4/+28)
Add a kernel parameter stack_depot_disable to disable stack depot, so that the stack hash table doesn't consume any memory when stack depot is disabled. The use case is CONFIG_PAGE_OWNER without page_owner=on. Without this patch, stackdepot will consume the memory for the hashtable; by default that's 8MB, which is not trivial. With this option, on a system with CONFIG_PAGE_OWNER configured but page_owner=off and stack_depot_disable on the kernel command line, we save the memory otherwise wasted on the hashtable. [[email protected]: fix CONFIG_STACKDEPOT=n build] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Vinayak Menon <[email protected]> Signed-off-by: Vijayanand Jitta <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Yogesh Lal <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2021-02-26  lib: stackdepot: add support to configure STACK_HASH_SIZE  (Yogesh Lal; 1 file, -2/+1)
Use CONFIG_STACK_HASH_ORDER to configure STACK_HASH_SIZE (see the one-liner below). The aim is to have a configurable value for STACK_HASH_SIZE so that, depending on the use case, one can tune it. One example is page owner: CONFIG_PAGE_OWNER works only if page_owner=on is passed as a kernel parameter on a CONFIG_PAGE_OWNER-configured system. Thus, unless the admin enables it via the command line option, stackdepot will just waste 8MB of memory without any consumer. Making it configurable and using a lower value helps to enable features like CONFIG_PAGE_OWNER without any significant overhead. Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Yogesh Lal <[email protected]> Signed-off-by: Vinayak Menon <[email protected]> Signed-off-by: Vijayanand Jitta <[email protected]> Reviewed-by: Minchan Kim <[email protected]> Reviewed-by: Alexander Potapenko <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
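The change itself is essentially one line, with the new Kconfig symbol supplying the order:

    /* Was a hard-coded order; now configurable via CONFIG_STACK_HASH_ORDER. */
    #define STACK_HASH_SIZE (1L << CONFIG_STACK_HASH_ORDER)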
2020-12-15  lib/stackdepot.c: use array_size() helper in jhash2()  (Gustavo A. R. Silva; 1 file, -2/+2)
Use array_size() helper instead of the open-coded version in jhash2(). These sorts of multiplication factors need to be wrapped in array_size(). Also, use the preferred form for passing the size of an object type. Link: https://lkml.kernel.org/r/cb8a682e4bba4dbddd2bd8aca7f8c02fea89639b.1601565471.git.gustavoars@kernel.org Signed-off-by: Gustavo A. R. Silva <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2020-12-15  lib/stackdepot.c: use flex_array_size() helper in memcpy()  (Gustavo A. R. Silva; 1 file, -1/+1)
Make use of the flex_array_size() helper to calculate the size of a flexible array member within an enclosing structure. This helper offers defense-in-depth against potential integer overflows, while at the same time makes it explicitly clear that we are dealing with a flexible array member. Link: https://lkml.kernel.org/r/186e37fe07196ee41a0e562fa8a8cb7a01112ec5.1601565471.git.gustavoars@kernel.org Signed-off-by: Gustavo A. R. Silva <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
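The resulting call, sketched (stack->entries is the flexible array member of struct stack_record):

    memcpy(stack->entries, entries, flex_array_size(stack, entries, size));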
2020-12-15  lib/stackdepot.c: replace one-element array with flexible-array member  (Gustavo A. R. Silva; 1 file, -3/+2)
Patch series "lib/stackdepot.c: Replace one-element array with flexible-array member". This series aims to replace a one-element array with a flexible-array member. Also, make use of the struct_size(), flexible_array_size() and array_size() helpers. This patch (of 3): There is a regular need in the kernel to provide a way to declare having a dynamically sized set of trailing elements in a structure. Kernel code should always use “flexible array members”[1] for these cases. The older style of one-element or zero-length arrays should no longer be used[2]. Refactor the code according to the use of a flexible-array member in struct stack_record, instead of a one-element array, and use the struct_size() helper to calculate the size for the allocation. [1] https://en.wikipedia.org/wiki/Flexible_array_member [2] https://www.kernel.org/doc/html/v5.9-rc1/process/deprecated.html#zero-length-and-one-element-arrays Link: https://lkml.kernel.org/r/[email protected] Link: https://lore.kernel.org/lkml/5f75876b.x9zdN10esiC0qLHV%[email protected]/ Link: https://lkml.kernel.org/r/2f1e6a17aaa891ad9c58817cf0a10b8ab8894f59.1601565471.git.gustavoars@kernel.org Signed-off-by: Gustavo A. R. Silva <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2020-04-07  kasan: stackdepot: move filter_irq_stacks() to stackdepot.c  (Alexander Potapenko; 1 file, -0/+24)
filter_irq_stacks() can be used by other tools (e.g. KMSAN), so it needs to be moved to a common location. lib/stackdepot.c seems a good place, as filter_irq_stacks() is usually applied to the output of stack_trace_save(). This patch has been previously mailed as part of KMSAN RFC patch series. [[email protected]: nds32: linker script: add SOFTIRQENTRY_TEXT] Link: http://lkml.kernel.org/r/[email protected] [[email protected]: add IRQENTRY_TEXT and SOFTIRQENTRY_TEXT to linker script] Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Alexander Potapenko <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Cc: Vegard Nossum <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Marco Elver <[email protected]> Cc: Andrey Konovalov <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Sergey Senozhatsky <[email protected]> Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Linus Torvalds <[email protected]>
2020-04-07  lib/stackdepot.c: check depot_index before accessing the stack slab  (Alexander Potapenko; 1 file, -2/+13)
Avoid crashes on corrupted stack ids. Although stack ID corruption may indicate other bugs in the program, we'd better fail gracefully on such IDs instead of crashing the kernel. This patch has been previously mailed as part of the KMSAN RFC patch series. Both checks are shown in the sketch below.

Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Alexander Potapenko <[email protected]> Cc: Vegard Nossum <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Marco Elver <[email protected]> Cc: Andrey Konovalov <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Sergey Senozhatsky <[email protected]>

From: Dan Carpenter <[email protected]>
Subject: lib/stackdepot.c: fix a condition in stack_depot_fetch()

We should check for a NULL pointer first, before adding the offset. Otherwise, if the pointer is NULL and the offset is non-zero, it will lead to an Oops.

Fixes: d45048e65a59 ("lib/stackdepot.c: check depot_index before accessing the stack slab") Signed-off-by: Dan Carpenter <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Acked-by: Alexander Potapenko <[email protected]> Link: http://lkml.kernel.org/r/20200312113006.GA20562@mwanda Signed-off-by: Linus Torvalds <[email protected]>
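Both checks, sketched as a fragment of stack_depot_fetch() (macro and field names per the stackdepot source of that era; details assumed):

    union handle_parts parts = { .handle = handle };
    void *slab;

    if (parts.slabindex > depot_index) {
            WARN(1, "slab index %d out of bounds (%d) for stack id %08u\n",
                 parts.slabindex, depot_index, handle);
            return 0;
    }
    slab = stack_slabs[parts.slabindex];
    if (!slab)      /* Dan's fix: NULL check before applying the offset */
            return 0;
    stack = slab + (parts.offset << STACK_ALLOC_ALIGN);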
2020-02-21  lib/stackdepot.c: fix global out-of-bounds in stack_slabs  (Alexander Potapenko; 1 file, -2/+6)
Walter Wu has reported a potential case in which init_stack_slab() is called after stack_slabs[STACK_ALLOC_MAX_SLABS - 1] has already been initialized. In that case init_stack_slab() will overwrite stack_slabs[STACK_ALLOC_MAX_SLABS], which may result in memory corruption. Link: http://lkml.kernel.org/r/[email protected] Fixes: cd11016e5f521 ("mm, kasan: stackdepot implementation. Enable stackdepot for SLAB") Signed-off-by: Alexander Potapenko <[email protected]> Reported-by: Walter Wu <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Matthias Brugger <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Josh Poimboeuf <[email protected]> Cc: Kate Stewart <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2019-08-19  lib/stackdepot: Fix outdated comments  (Miles Chen; 1 file, -2/+2)
Replace "depot_save_stack" with "stack_depot_save" in code comments because depot_save_stack() was replaced in commit c0cfc337264c ("lib/stackdepot: Provide functions which operate on plain storage arrays") and removed in commit 56d8f079c51a ("lib/stackdepot: Remove obsolete functions") Signed-off-by: Miles Chen <[email protected]> Signed-off-by: Thomas Gleixner <[email protected]> Link: https://lkml.kernel.org/r/[email protected]
2019-05-30  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 174  (Thomas Gleixner; 1 file, -10/+1)
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is distributed in the hope that it will be useful but without any warranty without even the implied warranty of merchantability or fitness for a particular purpose see the gnu general public license for more details extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 655 file(s). Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Allison Randal <[email protected]> Reviewed-by: Kate Stewart <[email protected]> Reviewed-by: Richard Fontana <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected] Signed-off-by: Greg Kroah-Hartman <[email protected]>
2019-04-29  lib/stackdepot: Remove obsolete functions  (Thomas Gleixner; 1 file, -20/+0)
No more users of the struct stack_trace based interfaces. Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Josh Poimboeuf <[email protected]> Acked-by: Alexander Potapenko <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Alexey Dobriyan <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: [email protected] Cc: David Rientjes <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: [email protected] Cc: Mike Rapoport <[email protected]> Cc: Akinobu Mita <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: [email protected] Cc: Robin Murphy <[email protected]> Cc: Marek Szyprowski <[email protected]> Cc: Johannes Thumshirn <[email protected]> Cc: David Sterba <[email protected]> Cc: Chris Mason <[email protected]> Cc: Josef Bacik <[email protected]> Cc: [email protected] Cc: [email protected] Cc: Mike Snitzer <[email protected]> Cc: Alasdair Kergon <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: [email protected] Cc: Joonas Lahtinen <[email protected]> Cc: Maarten Lankhorst <[email protected]> Cc: [email protected] Cc: David Airlie <[email protected]> Cc: Jani Nikula <[email protected]> Cc: Rodrigo Vivi <[email protected]> Cc: Tom Zanussi <[email protected]> Cc: Miroslav Benes <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
2019-04-29  lib/stackdepot: Provide functions which operate on plain storage arrays  (Thomas Gleixner; 1 file, -19/+51)
The struct stack_trace indirection in the stack depot functions is a truly pointless exercise which requires horrible code at the callsites. Provide interfaces based on plain storage arrays (prototypes shown below). Signed-off-by: Thomas Gleixner <[email protected]> Reviewed-by: Josh Poimboeuf <[email protected]> Acked-by: Alexander Potapenko <[email protected]> Cc: Andy Lutomirski <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Alexey Dobriyan <[email protected]> Cc: Andrew Morton <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: [email protected] Cc: David Rientjes <[email protected]> Cc: Catalin Marinas <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: [email protected] Cc: Mike Rapoport <[email protected]> Cc: Akinobu Mita <[email protected]> Cc: Christoph Hellwig <[email protected]> Cc: [email protected] Cc: Robin Murphy <[email protected]> Cc: Marek Szyprowski <[email protected]> Cc: Johannes Thumshirn <[email protected]> Cc: David Sterba <[email protected]> Cc: Chris Mason <[email protected]> Cc: Josef Bacik <[email protected]> Cc: [email protected] Cc: [email protected] Cc: Mike Snitzer <[email protected]> Cc: Alasdair Kergon <[email protected]> Cc: Daniel Vetter <[email protected]> Cc: [email protected] Cc: Joonas Lahtinen <[email protected]> Cc: Maarten Lankhorst <[email protected]> Cc: [email protected] Cc: David Airlie <[email protected]> Cc: Jani Nikula <[email protected]> Cc: Rodrigo Vivi <[email protected]> Cc: Tom Zanussi <[email protected]> Cc: Miroslav Benes <[email protected]> Cc: [email protected] Link: https://lkml.kernel.org/r/[email protected]
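The interfaces this provides (prototypes per the description above, with a typical call sequence):

    depot_stack_handle_t stack_depot_save(unsigned long *entries,
                                          unsigned int nr_entries,
                                          gfp_t gfp_flags);

    unsigned int stack_depot_fetch(depot_stack_handle_t handle,
                                   unsigned long **entries);

    /* Callers now pass plain arrays instead of struct stack_trace: */
    nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
    handle = stack_depot_save(entries, nr_entries, GFP_KERNEL);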
2018-02-06  lib/stackdepot.c: use a non-instrumented version of memcmp()  (Alexander Potapenko; 1 file, -3/+16)
stackdepot used to call memcmp(), which compiler tools normally instrument, therefore every lookup used to unnecessarily call instrumented code. This is somewhat ok in the case of KASAN, but under KMSAN a lot of time was spent in the instrumentation. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Alexander Potapenko <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Dmitry Vyukov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
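The replacement helper is along these lines (a word-sized equality test that instrumentation passes don't hook; exact body assumed):

    static inline int stackdepot_memcmp(const unsigned long *u1,
                                        const unsigned long *u2,
                                        unsigned int n)
    {
            /* Only equality matters for hash-bucket lookups. */
            for (; n-- ; u1++, u2++) {
                    if (*u1 != *u2)
                            return 1;
            }
            return 0;
    }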
2016-11-11  lib/stackdepot: export save/fetch stack for drivers  (Chris Wilson; 1 file, -0/+2)
Some drivers would like to record stacktraces in order to aid leak tracing. As stackdepot already provides a facility for storing only the unique traces, thereby reducing the memory required, export that functionality for use by drivers. The code was originally created for KASAN and moved under lib in commit cd11016e5f521 ("mm, kasan: stackdepot implementation. Enable stackdepot for SLAB") so that it could be shared with mm/. In turn, we want to share it now with drivers. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Chris Wilson <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: "Kirill A. Shutemov" <[email protected]> Cc: Daniel Vetter <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-10-27  lib/stackdepot.c: bump stackdepot capacity from 16MB to 128MB  (Dmitry Vyukov; 1 file, -1/+1)
KASAN uses stackdepot to memorize stacks for all kmalloc/kfree calls. Current stackdepot capacity is 16MB (1024 top-level entries x 4 pages on the second level). Size of each stack is (num_frames + 3) * sizeof(long), which gives us ~84K stacks. This capacity was chosen empirically and is enough to run the kernel normally. However, when lots of configs are enabled and a fuzzer tries to maximize code coverage, it easily hits the limit within tens of minutes. I've tested for a long time with the number of top-level entries bumped 4x (4096), and I think I've seen overflow only once. But I don't have all configs enabled and code coverage has not reached its maximum yet. So bump it 8x to 8192. Since we have a two-level table, the memory cost of this is very moderate -- currently the top-level table is 8KB, with this patch it is 64KB, which is negligible under KASAN. Here is some approximate math. 128MB allows us to memorize ~670K stacks (assuming a stack is ~200b). I've grepped the kernel for kmalloc|kfree|kmem_cache_alloc|kmem_cache_free|kzalloc|kstrdup|kstrndup|kmemdup and it gives ~60K matches. Most alloc/free call sites are reachable with only one stack. But some utility functions can have large fanout. Assuming the average fanout is 5x, the total number of alloc/free stacks is ~300K. Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Dmitry Vyukov <[email protected]> Cc: Andrey Ryabinin <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Baozeng Ding <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-07-28  lib/stackdepot.c: use __GFP_NOWARN for stack allocations  (Kirill A. Shutemov; 1 file, -0/+1)
This (large, atomic) allocation attempt can fail. We expect and handle that, so avoid the scary warning. Link: http://lkml.kernel.org/r/[email protected] Cc: Andrey Ryabinin <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Rik van Riel <[email protected]> Cc: David Rientjes <[email protected]> Cc: Mel Gorman <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-05-05  lib/stackdepot: avoid to return 0 handle  (Joonsoo Kim; 1 file, -1/+5)
Recently, we allowed saving stack traces whose hash value is 0. This causes the problem that stackdepot could return 0 even on success. A user of stackdepot cannot distinguish success from failure in that case, so we need to solve this problem. In this patch, one bit is added to the handle, and valid handles are made non-zero by setting this bit. After that, a valid handle will never be 0, and a 0 handle correctly represents failure. Fixes: 33334e25769c ("lib/stackdepot.c: allow the stack trace hash to be zero") Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Joonsoo Kim <[email protected]> Cc: Alexander Potapenko <[email protected]> Cc: Andrey Ryabinin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-04-28  lib/stackdepot.c: allow the stack trace hash to be zero  (Alexander Potapenko; 1 file, -4/+0)
Do not bail out from depot_save_stack() if the stack trace has a zero hash. Initially, depot_save_stack() silently dropped stack traces with zero hashes; however, there's actually no point in reserving this zero value. Reported-by: Joonsoo Kim <[email protected]> Signed-off-by: Alexander Potapenko <[email protected]> Acked-by: Andrey Ryabinin <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-25  mm, kasan: stackdepot implementation. Enable stackdepot for SLAB  (Alexander Potapenko; 1 file, -0/+284)
Implement the stack depot and provide CONFIG_STACKDEPOT. Stack depot will allow KASAN to store allocation/deallocation stack traces for memory chunks. The stack traces are stored in a hash table and referenced by handles which reside in the kasan_alloc_meta and kasan_free_meta structures in the allocated memory chunks (the handle layout is sketched below). IRQ stack traces are cut below the IRQ entry point to avoid unnecessary duplication.

Right now stackdepot support is only enabled for the SLAB allocator. Once KASAN features in SLAB are on par with those in SLUB, we can switch SLUB to stackdepot as well, thus removing the dependency on SLUB stack bookkeeping, which wastes a lot of memory. This patch is based on the "mm: kasan: stack depots" patch originally prepared by Dmitry Chernenkov. Joonsoo has said that he plans to reuse the stackdepot code for the mm/page_owner.c debugging facility.

[[email protected]: s/depot_stack_handle/depot_stack_handle_t] [[email protected]: comment style fixes] Signed-off-by: Alexander Potapenko <[email protected]> Signed-off-by: Andrey Ryabinin <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Andrey Konovalov <[email protected]> Cc: Dmitry Vyukov <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Konstantin Serebryany <[email protected]> Cc: Dmitry Chernenkov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
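A sketch of the handle layout that makes this scheme work (macro names per the stackdepot source; the exact bit widths are implementation details, and the 'valid' bit was only added by the 2016-05-05 patch above):

    union handle_parts {
            depot_stack_handle_t handle;
            struct {
                    u32 slabindex : STACK_ALLOC_INDEX_BITS;  /* which stack slab */
                    u32 offset : STACK_ALLOC_OFFSET_BITS;    /* aligned offset inside it */
                    u32 valid : STACK_ALLOC_NULL_PROTECTION_BITS; /* keeps valid handles non-zero */
            };
    };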