author     Stanislaw Gruszka <[email protected]>  2012-01-10 15:07:32 -0800
committer  Linus Torvalds <[email protected]>  2012-01-10 16:30:43 -0800
commit     fc8d8620d39dbbaf412b1b9247d77d196d92adb9 (patch)
tree       aab70c3807026701f326bb8a88c81845da059d9b
parent     c6968e73b90c2a2fb9a32d4bad249f8f70f70125 (diff)
slub: min order when debug_guardpage_minorder > 0
Disable slub debug facilities and allocate slabs at minimal order when
debug_guardpage_minorder > 0, to increase the probability of catching
random memory corruption via a CPU exception.
Signed-off-by: Stanislaw Gruszka <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Acked-by: Christoph Lameter <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Stanislaw Gruszka <[email protected]>
Cc: Pekka Enberg <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
-rw-r--r--  mm/slub.c | 3 +++
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 025f6ac51569..d99acbf14e01 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3654,6 +3654,9 @@ void __init kmem_cache_init(void)
 	struct kmem_cache *temp_kmem_cache_node;
 	unsigned long kmalloc_size;
 
+	if (debug_guardpage_minorder())
+		slub_max_order = 0;
+
 	kmem_size = offsetof(struct kmem_cache, node) +
 				nr_node_ids * sizeof(struct kmem_cache_node *);