commit c146a2b98eb5898eb0fab15a332257a4102ecae9
tree   87d665fec1265305deee3c9ab1028e0d9c58b364
parent 734537c9cb725fc8005ee7a25c48f1ad10fce5df
author    Alexander Potapenko <[email protected]>  2016-07-28 15:49:04 -0700
committer Linus Torvalds <[email protected]>       2016-07-28 16:07:41 -0700
mm, kasan: account for object redzone in SLUB's nearest_obj()
When looking up the nearest SLUB object for a given address, correctly
calculate its offset if SLAB_RED_ZONE is enabled for that cache.
Previously, when KASAN detected an error on an object from a cache
with SLAB_RED_ZONE set, the actual start address of the object was
miscalculated, which led to random stacks being reported.
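To illustrate the arithmetic, here is a minimal userspace sketch of the
corrected lookup. The fake_cache struct, its field values, and the slab
layout are hypothetical stand-ins for the kernel's kmem_cache and page
internals, not the real API:

#include <stdio.h>
#include <stddef.h>

/* Hypothetical stand-in for the kmem_cache fields used here. */
struct fake_cache {
	size_t size;          /* slot size, including red zones */
	size_t red_left_pad;  /* left red zone preceding each object */
};

/* Mirrors fixup_red_left(): skip the left red zone. */
static void *fixup_red_left(const struct fake_cache *c, char *p)
{
	return p + c->red_left_pad;
}

/* Simplified nearest_obj(): round down to the slot base, then fix up. */
static void *nearest_obj(const struct fake_cache *c, char *slab, char *x)
{
	char *object = x - (size_t)(x - slab) % c->size;

	return fixup_red_left(c, object);
}

int main(void)
{
	struct fake_cache c = { .size = 128, .red_left_pad = 16 };
	char slab[1024];
	char *addr = slab + 2 * 128 + 40;	/* inside the third slot */

	/* Without the fixup the answer would be 256; with it, 272. */
	printf("object starts at offset %td\n",
	       (char *)nearest_obj(&c, slab, addr) - slab);
	return 0;
}

The kernel version additionally clamps the result to the last valid
object in the page and only applies the pad when the cache is actually
being debugged with SLAB_RED_ZONE.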
Fixes: 7ed2f9e663854db ("mm, kasan: SLAB support")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Alexander Potapenko <[email protected]>
Cc: Andrey Konovalov <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Steven Rostedt (Red Hat) <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Kostya Serebryany <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Kuthonuzo Luruo <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
 include/linux/slub_def.h | 10 ++++++----
 mm/slub.c                |  2 +-
 2 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 5624c1f3eb0a..cf501cf8e6db 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -119,15 +119,17 @@ static inline void sysfs_slab_remove(struct kmem_cache *s)
 void object_err(struct kmem_cache *s, struct page *page,
 		u8 *object, char *reason);
 
+void *fixup_red_left(struct kmem_cache *s, void *p);
+
 static inline void *nearest_obj(struct kmem_cache *cache, struct page *page,
 				void *x) {
 	void *object = x - (x - page_address(page)) % cache->size;
 	void *last_object = page_address(page) +
 		(page->objects - 1) * cache->size;
-	if (unlikely(object > last_object))
-		return last_object;
-	else
-		return object;
+	void *result = (unlikely(object > last_object)) ? last_object : object;
+
+	result = fixup_red_left(cache, result);
+	return result;
 }
 
 #endif /* _LINUX_SLUB_DEF_H */
diff --git a/mm/slub.c b/mm/slub.c
index f9da8716b8b3..1cdde1a5ba5f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -124,7 +124,7 @@ static inline int kmem_cache_debug(struct kmem_cache *s)
 #endif
 }
 
-static inline void *fixup_red_left(struct kmem_cache *s, void *p)
+inline void *fixup_red_left(struct kmem_cache *s, void *p)
 {
 	if (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE)
 		p += s->red_left_pad;
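Note the companion change in mm/slub.c: fixup_red_left() loses its
static qualifier and gains a declaration in slub_def.h, so that the
inline nearest_obj(), which is expanded in other translation units
(such as KASAN's), can call it.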