| author | Yu Zhao <[email protected]> | 2020-01-30 22:11:57 -0800 |
|---|---|---|
| committer | Linus Torvalds <[email protected]> | 2020-01-31 10:30:36 -0800 |
| commit | 90e9f6a66c78fdf3c2e5884ffe97bfc2736863c2 | |
| tree | 78a55a9c67eaf037b93a1d8141bc9b747a0c5bf9 | |
| parent | 25b69918d9f16c729193cc7c6f48f0b8991813f9 | |
mm/slub.c: avoid slub allocation while holding list_lock
If we are already under list_lock, don't call kmalloc(). Otherwise we
will run into a deadlock because kmalloc() also tries to grab the same
lock.
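For context, the problematic shape looks roughly like the sketch below (simplified, with details elided; not the literal mm/slub.c code): the shutdown path walks the partial list under n->list_lock, and the per-slab debug helper allocated its scratch bitmap while that lock was still held.

```c
/*
 * Simplified sketch of the deadlock shape; signatures and bodies are
 * illustrative, not the exact mm/slub.c code.
 */
static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
{
	struct page *page, *h;

	spin_lock_irq(&n->list_lock);		/* first acquisition */
	list_for_each_entry_safe(page, h, &n->partial, lru) {
		if (!page->inuse)
			continue;
		/* called with n->list_lock still held */
		list_slab_objects(s, page);
	}
	spin_unlock_irq(&n->list_lock);
}

static void list_slab_objects(struct kmem_cache *s, struct page *page)
{
	/*
	 * BUG: bitmap_zalloc() is backed by kmalloc(), whose slow path
	 * (___slab_alloc) may take the same n->list_lock again ->
	 * recursive locking, as lockdep reports below.
	 */
	unsigned long *map = bitmap_zalloc(page->objects, GFP_ATOMIC);
	/* ... dump leaked objects using map ... */
	bitmap_free(map);
}
```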
Fix the problem by using a static bitmap instead.
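A minimal sketch of that fix, with hypothetical names (object_map, object_map_lock, get_map(), put_map()) standing in for whatever the patch actually uses: the scratch bitmap becomes static storage guarded by its own spinlock, so borrowing it while n->list_lock is held involves no allocation at all.

```c
#include <linux/bitmap.h>
#include <linux/spinlock.h>

/* Hypothetical bound on objects per slab; the real code derives it. */
#define MAX_OBJS_PER_SLAB 512

static DECLARE_BITMAP(object_map, MAX_OBJS_PER_SLAB);
static DEFINE_SPINLOCK(object_map_lock);

/*
 * Borrow the static bitmap. Safe under n->list_lock: no kmalloc(),
 * hence no chance of re-entering the allocator's slow path.
 */
static unsigned long *get_map(void)
{
	spin_lock(&object_map_lock);
	bitmap_zero(object_map, MAX_OBJS_PER_SLAB);
	return object_map;
}

/* Release the bitmap once the caller is done inspecting objects. */
static void put_map(unsigned long *map)
{
	spin_unlock(&object_map_lock);
}
```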
WARNING: possible recursive locking detected
--------------------------------------------
mount-encrypted/4921 is trying to acquire lock:
(&(&n->list_lock)->rlock){-.-.}, at: ___slab_alloc+0x104/0x437
but task is already holding lock:
(&(&n->list_lock)->rlock){-.-.}, at: __kmem_cache_shutdown+0x81/0x3cb
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(&(&n->list_lock)->rlock);
lock(&(&n->list_lock)->rlock);
*** DEADLOCK ***
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Yu Zhao <[email protected]>
Acked-by: Kirill A. Shutemov <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Tetsuo Handa <[email protected]>
Cc: Yu Zhao <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>