author     Vlastimil Babka <vbabka@suse.cz>    2024-09-13 11:08:27 +0200
committer  Vlastimil Babka <vbabka@suse.cz>    2024-09-13 11:08:27 +0200
commit     a715e94dbda4ece41aac49b7b7ff8ddb55a7fe08 (patch)
tree       337ca3751374479574ff2d2af58a8759b15e237b /mm/slab.h
parent     e02147cb703412fa13dd31908c734d7fb2314f55 (diff)
parent     9028cdeb38e1f37d63cb3154799dd259b67e879e (diff)
Merge branch 'slab/for-6.12/rcu_barriers' into slab/for-next
Merge most of SLUB feature work for 6.12:
- Barrier for pending kfree_rcu() in kmem_cache_destroy() and associated
refactoring of the destroy path (Vlastimil Babka)
- CONFIG_SLUB_RCU_DEBUG to allow KASAN catching UAF bugs in
SLAB_TYPESAFE_BY_RCU caches (Jann Horn)
- kmem_cache_charge() for delayed kmemcg charging (Shakeel Butt)
Diffstat (limited to 'mm/slab.h')
-rw-r--r--   mm/slab.h   7
1 file changed, 7 insertions(+), 0 deletions(-)
diff --git a/mm/slab.h b/mm/slab.h
index dcdb56b8e7f5..9f907e930609 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -443,6 +443,13 @@ static inline bool is_kmalloc_cache(struct kmem_cache *s)
 	return (s->flags & SLAB_KMALLOC);
 }
 
+static inline bool is_kmalloc_normal(struct kmem_cache *s)
+{
+	if (!is_kmalloc_cache(s))
+		return false;
+	return !(s->flags & (SLAB_CACHE_DMA|SLAB_ACCOUNT|SLAB_RECLAIM_ACCOUNT));
+}
+
 /* Legal flag mask for kmem_cache_create(), for various configurations */
 
 #define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | \
			 SLAB_CACHE_DMA32 | SLAB_PANIC | \