path: root/mm
Age  Commit message  (Author; files changed, -deleted/+added)
2016-03-15  memory-hotplug: add automatic onlining policy for the newly added memory  (Vitaly Kuznetsov; 1 file, -2/+15)
Currently, all newly added memory blocks remain in the 'offline' state unless someone onlines them; some Linux distributions carry special udev rules like
SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", ATTR{state}="online"
to make this happen automatically. This is not a great solution for virtual machines, where memory hotplug is used to address high memory pressure: such onlining is slow, and a userspace process doing it (udev) has a chance of being killed by the OOM killer, as it will probably need to allocate some memory. Introduce a default policy for newly added memory blocks in the /sys/devices/system/memory/auto_online_blocks file, with two possible values: "offline", which preserves the current behavior, and "online", which causes all newly added memory blocks to go online as soon as they're added. The default is "offline". Signed-off-by: Vitaly Kuznetsov <[email protected]> Reviewed-by: Daniel Kiper <[email protected]> Cc: Jonathan Corbet <[email protected]> Cc: Greg Kroah-Hartman <[email protected]> Cc: Daniel Kiper <[email protected]> Cc: Dan Williams <[email protected]> Cc: Tang Chen <[email protected]> Cc: David Vrabel <[email protected]> Acked-by: David Rientjes <[email protected]> Cc: Naoya Horiguchi <[email protected]> Cc: Xishi Qiu <[email protected]> Cc: Mel Gorman <[email protected]> Cc: "K. Y. Srinivasan" <[email protected]> Cc: Igor Mammedov <[email protected]> Cc: Kay Sievers <[email protected]> Cc: Konrad Rzeszutek Wilk <[email protected]> Cc: Boris Ostrovsky <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
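For reference, a minimal userspace sketch of switching the new policy (the sysfs path is the one this patch introduces; requires root, and the file only exists on kernels with this change):

```c
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/devices/system/memory/auto_online_blocks", "w");

	if (!f) {
		perror("auto_online_blocks");
		return 1;
	}
	fputs("online", f);	/* write "offline" to restore the default */
	return fclose(f) == 0 ? 0 : 1;
}
```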
2016-03-15  mm/memory.c: make apply_to_page_range() more robust  (Mika Penttilä; 1 file, -1/+3)
Arm and arm64 used to trigger this BUG_ON() - this has now been fixed. But a WARN_ON() here is sufficient to catch future buggy callers. Signed-off-by: Mika Penttilä <[email protected]> Reviewed-by: Pekka Enberg <[email protected]> Acked-by: David Rientjes <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm/mempolicy.c: skip VM_HUGETLB and VM_MIXEDMAP VMA for lazy mbind  (Liang Chen; 1 file, -1/+3)
VM_HUGETLB and VM_MIXEDMAP VMAs need to be excluded to avoid compound pages being marked for migration and unexpected COWs when handling hugetlb faults. Thanks to Naoya Horiguchi for reminding me of these checks. Signed-off-by: Liang Chen <[email protected]> Signed-off-by: Gavin Guo <[email protected]> Suggested-by: Naoya Horiguchi <[email protected]> Acked-by: David Rientjes <[email protected]> Cc: SeongJae Park <[email protected]> Cc: Rik van Riel <[email protected]> Cc: Mel Gorman <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm/memory-failure.c: remove useless "undef"s  (Wang Xiaoqiang; 1 file, -2/+0)
Remove the useless #undef, since the corresponding #define has already been removed. Signed-off-by: Wang Xiaoqiang <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm/madvise: pass return code of memory_failure() to userspace  (Naoya Horiguchi; 1 file, -2/+3)
Currently the return value of memory_failure() is not passed to userspace when madvise(MADV_HWPOISON) is used. This is inconvenient for test programs that want to know the result of error handling. So let's return it to the caller as we already do in the MADV_SOFT_OFFLINE case. Signed-off-by: Naoya Horiguchi <[email protected]> Cc: Chen Gong <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
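A hedged sketch of a test that relies on the new behaviour: madvise(MADV_HWPOISON) really poisons the page, so this needs root (CAP_SYS_ADMIN) and CONFIG_MEMORY_FAILURE. After this patch a failure inside memory_failure() becomes visible in the return value:

```c
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long psz = sysconf(_SC_PAGESIZE);
	char *p = mmap(NULL, psz, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;
	p[0] = 1;	/* fault the page in so there is something to poison */

	/* before this patch the memory_failure() result was lost */
	int ret = madvise(p, psz, MADV_HWPOISON);
	printf("madvise(MADV_HWPOISON) = %d\n", ret);
	return 0;
}
```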
2016-03-15  mm, sl[au]b: print gfp_flags as strings in slab_out_of_memory()  (Vlastimil Babka; 2 files, -8/+6)
We can now print gfp_flags in a more human-readable way. Make use of this in slab_out_of_memory() for SLUB and SLAB. Also convert the SLAB variant to pr_warn() along the way. Signed-off-by: Vlastimil Babka <[email protected]> Acked-by: David Rientjes <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: Joonsoo Kim <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm/page_poisoning.c: allow for zero poisoning  (Laura Abbott; 4 files, -5/+37)
By default, page poisoning uses a poison value (0xaa) on free. If this is changed to 0, the page is not only sanitized, but zeroing on alloc with __GFP_ZERO can be skipped as well. The tradeoff is that corruption from the poisoning is harder to detect. This feature also cannot be used with hibernation, since pages are not guaranteed to be zeroed after hibernation. Credit to the Grsecurity/PaX team for inspiring this work. Signed-off-by: Laura Abbott <[email protected]> Acked-by: Rafael J. Wysocki <[email protected]> Cc: "Kirill A. Shutemov" <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Kees Cook <[email protected]> Cc: Mathias Krause <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Jianyu Zhan <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm/page_poison.c: enable PAGE_POISONING as a separate option  (Laura Abbott; 4 files, -14/+75)
Page poisoning is currently set up as a feature only if architectures don't have architecture debug page_alloc to allow unmapping of pages. It has uses apart from that, though. Clearing the pages on free provides an increase in security, as it helps to limit the risk of information leaks. Allow page poisoning to be enabled as a separate option, independent of kernel_map pages, since the two features do separate work. Because of how hibernation is implemented, the checks on alloc cannot occur if hibernation is enabled. The runtime alloc checks can also be enabled with an option when !HIBERNATION. Credit to the Grsecurity/PaX team for inspiring this work. Signed-off-by: Laura Abbott <[email protected]> Cc: Rafael J. Wysocki <[email protected]> Cc: "Kirill A. Shutemov" <[email protected]> Cc: Vlastimil Babka <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Kees Cook <[email protected]> Cc: Mathias Krause <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Jianyu Zhan <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm, debug: move bad flags printing to bad_page()  (Vlastimil Babka; 2 files, -10/+10)
Since bad_page() is the only user of the badflags parameter of dump_page_badflags(), we can move the code to bad_page() and simplify a bit. The dump_page_badflags() function is renamed to __dump_page() and can still be called separately from dump_page() for temporary debug prints where page_owner info is not desired. The only user-visible change is that page->mem_cgroup is printed before the bad flags. Signed-off-by: Vlastimil Babka <[email protected]> Acked-by: Michal Hocko <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Sasha Levin <[email protected]> Cc: "Kirill A. Shutemov" <[email protected]> Cc: Mel Gorman <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm, page_owner: dump page owner info from dump_page()  (Vlastimil Babka; 3 files, -0/+28)
The page_owner mechanism is useful for dealing with memory leaks. By reading /sys/kernel/debug/page_owner one can determine the stack traces leading to the allocation of all pages, and find e.g. a buggy driver. This information might also be useful for debugging, such as for the VM_BUG_ON_PAGE() calls to dump_page(). So let's print the stored info from dump_page(). Example output:

page:ffffea000292f1c0 count:1 mapcount:0 mapping:ffff8800b2f6cc18 index:0x91d
flags: 0x1fffff8001002c(referenced|uptodate|lru|mappedtodisk)
page dumped because: VM_BUG_ON_PAGE(1)
page->mem_cgroup:ffff8801392c5000
page allocated via order 0, migratetype Movable, gfp_mask 0x24213ca(GFP_HIGHUSER_MOVABLE|__GFP_COLD|__GFP_NOWARN|__GFP_NORETRY)
 [<ffffffff811682c4>] __alloc_pages_nodemask+0x134/0x230
 [<ffffffff811b40c8>] alloc_pages_current+0x88/0x120
 [<ffffffff8115e386>] __page_cache_alloc+0xe6/0x120
 [<ffffffff8116ba6c>] __do_page_cache_readahead+0xdc/0x240
 [<ffffffff8116bd05>] ondemand_readahead+0x135/0x260
 [<ffffffff8116be9c>] page_cache_async_readahead+0x6c/0x70
 [<ffffffff811604c2>] generic_file_read_iter+0x3f2/0x760
 [<ffffffff811e0dc7>] __vfs_read+0xa7/0xd0
page has been migrated, last migrate reason: compaction

Signed-off-by: Vlastimil Babka <[email protected]> Acked-by: Michal Hocko <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Sasha Levin <[email protected]> Cc: "Kirill A. Shutemov" <[email protected]> Cc: Mel Gorman <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
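A small reader for the interface the message refers to; a sketch, assuming debugfs is mounted and the kernel was booted with page_owner=on:

```c
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/kernel/debug/page_owner", "r");
	char buf[4096];
	size_t n;

	if (!f) {
		perror("page_owner");
		return 1;
	}
	/* each read yields one "page allocated via order ..." record with
	 * its stack trace, like the dump_page() output above */
	n = fread(buf, 1, sizeof(buf) - 1, f);
	buf[n] = '\0';
	fputs(buf, stdout);
	return fclose(f) == 0 ? 0 : 1;
}
```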
2016-03-15  mm, page_owner: track and print last migrate reason  (Vlastimil Babka; 3 files, -3/+35)
During migration, page_owner info is now copied with the rest of the page, so the stack trace leading to the free page allocation during migration is overwritten. For debugging purposes, it might however be useful to know that the page has been migrated since its initial allocation. This might happen many times during the lifetime, for different reasons, and fully tracking this, especially with stack traces, would incur extra memory costs. As a compromise, store and print the migrate_reason of the last migration that occurred to the page. This is enough to distinguish compaction, numa balancing etc. Example page_owner entry after the patch:

Page allocated via order 0, mask 0x24200ca(GFP_HIGHUSER_MOVABLE)
PFN 628753 type Movable Block 1228 type Movable Flags 0x1fffff80040030(dirty|lru|swapbacked)
 [<ffffffff811682c4>] __alloc_pages_nodemask+0x134/0x230
 [<ffffffff811b6325>] alloc_pages_vma+0xb5/0x250
 [<ffffffff81177491>] shmem_alloc_page+0x61/0x90
 [<ffffffff8117a438>] shmem_getpage_gfp+0x678/0x960
 [<ffffffff8117c2b9>] shmem_fallocate+0x329/0x440
 [<ffffffff811de600>] vfs_fallocate+0x140/0x230
 [<ffffffff811df434>] SyS_fallocate+0x44/0x70
 [<ffffffff8158cc2e>] entry_SYSCALL_64_fastpath+0x12/0x71
Page has been migrated, last migrate reason: compaction

Signed-off-by: Vlastimil Babka <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Sasha Levin <[email protected]> Cc: "Kirill A. Shutemov" <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Michal Hocko <[email protected]> Cc: Hugh Dickins <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm, page_owner: copy page owner info during migration  (Vlastimil Babka; 2 files, -0/+28)
The page_owner mechanism stores the gfp_flags of an allocation and the stack trace that led to it. During page migration, the original information is practically replaced by the allocation of the free page used as the migration target. Arguably this is less useful, and it might lead to all the page_owner info for migratable pages gradually converging towards compaction or numa balancing migrations. It has also led to inaccuracies such as the one fixed by commit e2cfc91120fa ("mm/page_owner: set correct gfp_mask on page_owner"). This patch thus introduces copying the page_owner info during migration. However, since the fact that the page has been migrated from its original place might be useful for debugging, the next patch will introduce a way to track that information as well. Signed-off-by: Vlastimil Babka <[email protected]> Acked-by: Michal Hocko <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Sasha Levin <[email protected]> Cc: "Kirill A. Shutemov" <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Hugh Dickins <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm, page_owner: convert page_owner_inited to static key  (Vlastimil Babka; 2 files, -5/+6)
CONFIG_PAGE_OWNER attempts to impose negligible runtime overhead when enabled during compilation, but not actually enabled during runtime by boot param page_owner=on. This overhead can be further reduced using the static key mechanism, which this patch does. Signed-off-by: Vlastimil Babka <[email protected]> Acked-by: Michal Hocko <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Sasha Levin <[email protected]> Cc: "Kirill A. Shutemov" <[email protected]> Cc: Mel Gorman <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
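A kernel-internal sketch of the static key pattern this commit applies, not the patch itself; page_owner_enabled_by_bootparam() is a hypothetical stand-in for the real boot-parameter parsing:

```c
#include <linux/jump_label.h>

static DEFINE_STATIC_KEY_FALSE(page_owner_inited);

static int __init init_page_owner_sketch(void)
{
	if (page_owner_enabled_by_bootparam())	/* hypothetical helper */
		static_branch_enable(&page_owner_inited);
	return 0;
}

static inline void track_page_sketch(void)
{
	/* patched to a plain nop while the key is disabled */
	if (!static_branch_unlikely(&page_owner_inited))
		return;
	/* ... record order, gfp_mask and stack trace ... */
}
```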
2016-03-15  mm, page_owner: print migratetype of page and pageblock, symbolic flags  (Vlastimil Babka; 3 files, -30/+20)
The information in /sys/kernel/debug/page_owner includes the migratetype of the pageblock the page belongs to. This is also checked against the page's migratetype (as declared by gfp_flags during its allocation), and the page is reported as Fallback if its migratetype differs from the pageblock's. This is somewhat misleading, because in fact fallback allocation is not the only reason why these two can differ. It also doesn't directly provide the page's migratetype, although it's possible to derive that from the gfp_flags. It's arguably better to print both the page's and the pageblock's migratetype and leave the interpretation to the consumer than to suggest fallback allocation as the only possible reason. While at it, we can print the migratetypes as strings the same way /proc/pagetypeinfo does, as some of the numeric values depend on kernel configuration. For that, this patch moves the migratetype_names array from the #ifdef CONFIG_PROC_FS part of mm/vmstat.c to mm/page_alloc.c and exports it. With the new format strings for flags, we can now also provide symbolic page and gfp flags in the /sys/kernel/debug/page_owner file. This replaces the positional printing of page flags as single letters, which might have looked nicer, but was limited to a subset of flags and required the user to remember the letters. Example page_owner entry after the patch:

Page allocated via order 0, mask 0x24213ca(GFP_HIGHUSER_MOVABLE|__GFP_COLD|__GFP_NOWARN|__GFP_NORETRY)
PFN 520 type Movable Block 1 type Movable Flags 0xfffff8001006c(referenced|uptodate|lru|active|mappedtodisk)
 [<ffffffff811682c4>] __alloc_pages_nodemask+0x134/0x230
 [<ffffffff811b4058>] alloc_pages_current+0x88/0x120
 [<ffffffff8115e386>] __page_cache_alloc+0xe6/0x120
 [<ffffffff8116ba6c>] __do_page_cache_readahead+0xdc/0x240
 [<ffffffff8116bd05>] ondemand_readahead+0x135/0x260
 [<ffffffff8116bfb1>] page_cache_sync_readahead+0x31/0x50
 [<ffffffff81160523>] generic_file_read_iter+0x453/0x760
 [<ffffffff811e0d57>] __vfs_read+0xa7/0xd0

Signed-off-by: Vlastimil Babka <[email protected]> Acked-by: Michal Hocko <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Sasha Levin <[email protected]> Cc: "Kirill A. Shutemov" <[email protected]> Cc: Mel Gorman <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm, oom: print symbolic gfp_flags in oom warning  (Vlastimil Babka; 1 file, -3/+4)
It would be useful to translate gfp_flags into string representation when printing in case of an OOM, especially as the flags have been undergoing some changes recently and the script ./scripts/gfp-translate needs a matching source version to be accurate. Example output:

a.out invoked oom-killer: gfp_mask=0x24280ca(GFP_HIGHUSER_MOVABLE|GFP_ZERO), order=0, oom_score_adj=0

Signed-off-by: Vlastimil Babka <[email protected]> Acked-by: Michal Hocko <[email protected]> Acked-by: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Sasha Levin <[email protected]> Cc: "Kirill A. Shutemov" <[email protected]> Cc: Mel Gorman <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm, page_alloc: print symbolic gfp_flags on allocation failure  (Vlastimil Babka; 1 file, -3/+2)
It would be useful to translate gfp_flags into string representation when printing in case of an allocation failure, especially as the flags have been undergoing some changes recently and the script ./scripts/gfp-translate needs a matching source version to be accurate. Example output: stapio: page allocation failure: order:9, mode:0x2080020(GFP_ATOMIC) Signed-off-by: Vlastimil Babka <[email protected]> Acked-by: Michal Hocko <[email protected]> Acked-by: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Sasha Levin <[email protected]> Cc: "Kirill A. Shutemov" <[email protected]> Cc: Mel Gorman <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm, debug: replace dump_flags() with the new printk formats  (Vlastimil Babka; 1 file, -46/+14)
With the new printk format strings for flags, we can get rid of dump_flags() in mm/debug.c. This also fixes dump_vma() which used dump_flags() for printing vma flags. However dump_flags() did a page-flags specific filtering of bits higher than NR_PAGEFLAGS in order to remove the zone id part. For dump_vma() this resulted in removing several VM_* flags from the symbolic translation. Signed-off-by: Vlastimil Babka <[email protected]> Acked-by: Michal Hocko <[email protected]> Acked-by: David Rientjes <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Rasmus Villemoes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Sasha Levin <[email protected]> Cc: "Kirill A. Shutemov" <[email protected]> Cc: Mel Gorman <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm, printk: introduce new format string for flags  (Vlastimil Babka; 2 files, -14/+26)
In mm we use several kinds of flags bitfields that are sometimes printed for debugging purposes, or exported to userspace via sysfs. To make them easier to interpret independently of kernel version and config, we want to dump also the symbolic flag names. So far this has been done with repeated calls to pr_cont(), which is unreliable on SMP and not usable for e.g. sysfs export. To get a more reliable and universal solution, this patch extends the printk() format string for pointers to handle page flags (%pGp), gfp_flags (%pGg) and vma flags (%pGv). Existing users of dump_flag_names() are converted and simplified. It would be possible to pass flags by value instead of by pointer, but the %p format string for pointers already has extensions for various kernel structures, so it's a good fit, and the extra indirection in a non-critical path is negligible. [[email protected]: lots of good implementation suggestions] Signed-off-by: Vlastimil Babka <[email protected]> Acked-by: Michal Hocko <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Rasmus Villemoes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Sasha Levin <[email protected]> Cc: "Kirill A. Shutemov" <[email protected]> Cc: Mel Gorman <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
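A kernel-side fragment showing how the new specifiers are used after this patch; the function and variables are illustrative, but %pGp/%pGg/%pGv and the pass-by-pointer convention are the ones the patch defines:

```c
#include <linux/mm.h>
#include <linux/printk.h>

static void print_mm_flags(struct page *page, struct vm_area_struct *vma)
{
	gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;

	pr_info("page flags: %pGp\n", &page->flags);	/* e.g. "locked|dirty" */
	pr_info("gfp flags:  %pGg\n", &gfp);
	pr_info("vma flags:  %pGv\n", &vma->vm_flags);
}
```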
2016-03-15  mm, tracing: unify mm flags handling in tracepoints and printk  (Vlastimil Babka; 1 file, -77/+11)
In tracepoints, it's possible to print gfp flags in a human-friendly format through a macro show_gfp_flags(), which defines a translation array and passes it to __print_flags(). Since the following patch will introduce support for gfp flags printing in printk(), it would be nice to reuse the array. This is not straightforward, since __print_flags() can't simply reference an array defined in a .c file such as mm/debug.c: it has to be a macro to allow the macro magic to communicate the format to userspace tools such as trace-cmd. The solution is to create a macro __def_gfpflag_names which is used both in show_gfp_flags() and to define the gfpflag_names[] array in mm/debug.c. On the other hand, mm/debug.c also defines translation tables for page flags and vma flags, and the desire was expressed (but not implemented in this series) to use these also from tracepoints. Thus, this patch also renames the events/gfpflags.h file to events/mmflags.h and moves the table definitions there, using the same macro approach as for gfpflags. This allows translating all three kinds of mm-specific flags both in tracepoints and printk. Signed-off-by: Vlastimil Babka <[email protected]> Reviewed-by: Michal Hocko <[email protected]> Cc: Steven Rostedt <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Arnaldo Carvalho de Melo <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Rasmus Villemoes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Minchan Kim <[email protected]> Cc: Sasha Levin <[email protected]> Cc: "Kirill A. Shutemov" <[email protected]> Cc: Mel Gorman <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm: filemap: avoid unnecessary calls to lock_page when waiting for IO to complete during a read  (Mel Gorman; 1 file, -0/+49)
In the generic read paths the kernel looks up a page in the page cache and, if it's up to date, it is used. If not, the page lock is acquired to wait for IO to complete and then check the page. If multiple processes are waiting on IO, they all serialise against the lock and duplicate the checks. This is unnecessary. The page lock in itself does not give any guarantees to the callers about the page state, as it can be immediately truncated or reclaimed after the page is unlocked. It's sufficient to wait_on_page_locked and then continue if the page is up to date on wakeup. It is possible that a truncated but up-to-date page is returned, but the reference taken during read prevents it disappearing underneath the caller, and the data is still valid if PageUptodate. The overall impact is small, as even if processes serialise on the lock, the lock section is tiny once the IO is complete. Profiles indicated that unlock_page and friends are generally a tiny portion of a read-intensive workload. An artificial test was created that had instances of dd access a cache-cold file on an ext4 filesystem and measured how long the read took.

paralleldd                  4.4.0                 4.4.0
                          vanilla             avoidlock
Amean  Elapsd-1       5.28 (  0.00%)        5.15 (  2.50%)
Amean  Elapsd-4       5.29 (  0.00%)        5.17 (  2.12%)
Amean  Elapsd-7       5.28 (  0.00%)        5.18 (  1.78%)
Amean  Elapsd-12      5.20 (  0.00%)        5.33 ( -2.50%)
Amean  Elapsd-21      5.14 (  0.00%)        5.21 ( -1.41%)
Amean  Elapsd-30      5.30 (  0.00%)        5.12 (  3.38%)
Amean  Elapsd-48      5.78 (  0.00%)        5.42 (  6.21%)
Amean  Elapsd-79      6.78 (  0.00%)        6.62 (  2.46%)
Amean  Elapsd-110     9.09 (  0.00%)        8.99 (  1.15%)
Amean  Elapsd-128    10.60 (  0.00%)       10.43 (  1.66%)

The impact is small but, intuitively, it makes sense to avoid unnecessary calls to lock_page. Signed-off-by: Mel Gorman <[email protected]> Reviewed-by: Jan Kara <[email protected]> Cc: Hugh Dickins <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
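A heavily condensed kernel-internal sketch of the idea, not the actual patch; the real code falls back to the locked slow path instead of returning an error directly:

```c
#include <linux/pagemap.h>

static int wait_for_page_read_sketch(struct page *page)
{
	if (PageUptodate(page))
		return 0;
	/* old path: lock_page(page), so every waiter serialises on the lock.
	 * new path: just sleep until the IO side unlocks, then re-test. */
	wait_on_page_locked(page);
	if (PageUptodate(page))
		return 0;	/* our reference keeps the page's data valid */
	return -EIO;
}
```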
2016-03-15  mm: filemap: remove redundant code in do_read_cache_page  (Mel Gorman; 1 file, -31/+12)
do_read_cache_page and __read_cache_page duplicate page filler code when filling the page for the first time. This patch simply removes the duplicate logic. Signed-off-by: Mel Gorman <[email protected]> Reviewed-by: Jan Kara <[email protected]> Cc: Hugh Dickins <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm/debug_pagealloc: ask users for default setting of debug_pagealloc  (Christian Borntraeger; 2 files, -3/+21)
Since commit 031bc5743f158 ("mm/debug-pagealloc: make debug-pagealloc boottime configurable"), CONFIG_DEBUG_PAGEALLOC by default does not add any page debugging. This resulted in several unnoticed bugs, e.g. https://lkml.kernel.org/g/<[email protected]> or https://lkml.kernel.org/g/<[email protected]>, as this behaviour change was not even documented in Kconfig. Let's provide a new Kconfig symbol that allows changing the default back to enabled, e.g. for debug kernels. This also makes the change obvious to kernel packagers. Let's also change the Kconfig description for CONFIG_DEBUG_PAGEALLOC to indicate that there are two stages of overhead. Signed-off-by: Christian Borntraeger <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Peter Zijlstra <[email protected]> Cc: Heiko Carstens <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm/page-writeback: fix dirty_ratelimit calculation  (Andrey Ryabinin; 1 file, -5/+6)
Calculation of dirty_ratelimit is sometimes not correct. E.g. the initial values dirty_ratelimit == INIT_BW and step == 0 lead to the following result:

UBSAN: Undefined behaviour in ../mm/page-writeback.c:1286:7
shift exponent 25600 is too large for 64-bit type 'long unsigned int'

The fix is straightforward: make step 0 if the shift exponent is too big. Signed-off-by: Andrey Ryabinin <[email protected]> Cc: Wu Fengguang <[email protected]> Cc: Tejun Heo <[email protected]> Cc: Andy Shevchenko <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
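The undefined behaviour and the guard, as a standalone C sketch; the variable names are illustrative, not the ones in page-writeback.c:

```c
#include <stdio.h>
#include <limits.h>

/* x >> shift is undefined when shift >= the width of the type; clamp the
 * result to 0 instead, which is what the fix effectively does with 'step'. */
static unsigned long guarded_shift(unsigned long x, unsigned int shift)
{
	if (shift >= sizeof(x) * CHAR_BIT)
		return 0;
	return x >> shift;
}

int main(void)
{
	printf("%lu\n", guarded_shift(1024, 4));	/* 64 */
	printf("%lu\n", guarded_shift(1024, 25600));	/* 0, no UB */
	return 0;
}
```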
2016-03-15  mm/page_alloc.c: rework code layout in memmap_init_zone()  (Andrew Morton; 1 file, -41/+38)
This function is getting full of weird tricks to avoid word-wrapping. Use a goto to eliminate a tab stop, then use the new space. Cc: Taku Izumi <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm/page_alloc.c: introduce kernelcore=mirror option  (Taku Izumi; 1 file, -6/+108)
This patch extends the existing "kernelcore" option and introduces the kernelcore=mirror option. By specifying "mirror" instead of the amount of memory, non-mirrored (non-reliable) regions will be arranged into ZONE_MOVABLE. [[email protected]: fix build with CONFIG_HAVE_MEMBLOCK_NODE_MAP=n] Signed-off-by: Taku Izumi <[email protected]> Tested-by: Sudeep Holla <[email protected]> Cc: Tony Luck <[email protected]> Cc: Xishi Qiu <[email protected]> Cc: KAMEZAWA Hiroyuki <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Matt Fleming <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Steve Capper <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm/page_alloc.c: calculate zone_start_pfn at zone_spanned_pages_in_node()  (Taku Izumi; 1 file, -11/+29)
Xeon E7 v3 based systems support Address Range Mirroring, and a UEFI BIOS compliant with UEFI spec 2.5 can notify which ranges are mirrored (reliable) via the EFI memory map. Now the Linux kernel utilizes this information and allocates boot time memory from the reliable region. My requirement is: allocate kernel memory from the mirrored region, and allocate user memory from the non-mirrored region. In order to meet this requirement, ZONE_MOVABLE is useful. By arranging the non-mirrored range into ZONE_MOVABLE, mirrored memory is used for kernel allocations. My idea is to extend the existing "kernelcore" option and introduce the kernelcore=mirror option. By specifying "mirror" instead of the amount of memory, the non-mirrored region will be arranged into ZONE_MOVABLE. Earlier discussions are at: https://lkml.org/lkml/2015/10/9/24 https://lkml.org/lkml/2015/10/15/9 https://lkml.org/lkml/2015/11/27/18 https://lkml.org/lkml/2015/12/8/836 For example, suppose a 2-node system with the following memory ranges:

node 0 [mem 0x0000000000001000-0x000000109fffffff]
node 1 [mem 0x00000010a0000000-0x000000209fffffff]

and the following ranges marked as reliable (mirrored):

[0x0000000000000000-0x0000000100000000]
[0x0000000100000000-0x0000000180000000]
[0x0000000800000000-0x0000000880000000]
[0x00000010a0000000-0x0000001120000000]
[0x00000017a0000000-0x0000001820000000]

If you specify kernelcore=mirror, ZONE_NORMAL and ZONE_MOVABLE are arranged as below:

- node 0:
  ZONE_NORMAL : [0x0000000100000000-0x00000010a0000000]
  ZONE_MOVABLE: [0x0000000180000000-0x00000010a0000000]
- node 1:
  ZONE_NORMAL : [0x00000010a0000000-0x00000020a0000000]
  ZONE_MOVABLE: [0x0000001120000000-0x00000020a0000000]

In the overlapped range, pages to be ZONE_MOVABLE in ZONE_NORMAL are treated as absent pages, and vice versa. This patch (of 2): Currently each zone's zone_start_pfn is calculated at free_area_init_core(). However, the zone's range is fixed at the time zone_spanned_pages_in_node() is invoked. This patch changes how each zone->zone_start_pfn is calculated, moving it into zone_spanned_pages_in_node(). Signed-off-by: Taku Izumi <[email protected]> Cc: Tony Luck <[email protected]> Cc: Xishi Qiu <[email protected]> Cc: KAMEZAWA Hiroyuki <[email protected]> Cc: Mel Gorman <[email protected]> Cc: Dave Hansen <[email protected]> Cc: Matt Fleming <[email protected]> Cc: Arnd Bergmann <[email protected]> Cc: Steve Capper <[email protected]> Cc: Sudeep Holla <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm/slub: support left redzone  (Joonsoo Kim; 1 file, -29/+71)
SLUB already has a redzone debugging feature. But it is only positioned at the end of the object (aka the right redzone), so it cannot catch a left OOB. Although the current object's right redzone acts as the left redzone of the next object, the first object in a slab cannot take advantage of this effect. This patch explicitly adds a left redzone to each object, to detect left OOBs more precisely. Background: Someone complained to me that a left OOB is not caught even if KASAN is enabled, which does page allocation debugging. That page is out of our control, so it could be allocated when the left OOB happens and, in this case, we can't find the OOB. Moreover, the SLUB debugging feature can be enabled without page allocator debugging and, in that case, we will also miss the OOB. Before trying to implement this, I expected that the changes would be too complex, but it doesn't look that complex to me now. Almost all changes are applied to debug-specific functions, so I feel okay about it. Signed-off-by: Joonsoo Kim <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
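A userspace toy of the left-redzone idea; the pattern value and sizes are made up, and SLUB's real layout and poison values differ:

```c
#include <stdio.h>
#include <string.h>

#define REDZONE	8
#define PATTERN	0xbb

struct padded_obj {
	unsigned char left_red[REDZONE];	/* guards the object's left edge */
	unsigned char obj[32];
};

static int left_redzone_intact(const struct padded_obj *p)
{
	for (int i = 0; i < REDZONE; i++)
		if (p->left_red[i] != PATTERN)
			return 0;
	return 1;
}

int main(void)
{
	struct padded_obj p;

	memset(p.left_red, PATTERN, REDZONE);
	p.left_red[REDZONE - 1] = 0;	/* what a 1-byte left OOB write would hit */
	printf("left redzone %s\n",
	       left_redzone_intact(&p) ? "intact" : "corrupted");
	return 0;
}
```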
2016-03-15  slub: relax CMPXCHG consistency restrictions  (Laura Abbott; 1 file, -3/+9)
When debug options are enabled, cmpxchg on the page is disabled. This is because the page must be locked to ensure there are no false positives when performing consistency checks. Some debug options, such as poisoning and red zoning, only act on the object itself, so there is no need to protect other CPUs from modifications that touch only the object. Allow cmpxchg to happen when poisoning and red zoning are set on a slab. Credit to Mathias Krause for the original work which inspired this series. Signed-off-by: Laura Abbott <[email protected]> Acked-by: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Kees Cook <[email protected]> Cc: Mathias Krause <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  slub: convert SLAB_DEBUG_FREE to SLAB_CONSISTENCY_CHECKS  (Laura Abbott; 2 files, -37/+62)
SLAB_DEBUG_FREE allows expensive consistency checks at free to be turned on or off. Expand its use to be able to turn off all consistency checks. This gives a nice speed up if you only want features such as poisoning or tracing. Credit to Mathias Krause for the original work which inspired this series Signed-off-by: Laura Abbott <[email protected]> Acked-by: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Kees Cook <[email protected]> Cc: Mathias Krause <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  slub: fix/clean free_debug_processing return paths  (Laura Abbott; 1 file, -11/+10)
Since commit 19c7ff9ecd89 ("slub: Take node lock during object free checks"), check_object has been incorrectly returning success, as it falls through to the out label, which just returns the node. Thanks to refactoring, the out and fail paths are now basically the same. Combine the two into one and just use a single label. Credit to Mathias Krause for the original work which inspired this series. Signed-off-by: Laura Abbott <[email protected]> Acked-by: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Kees Cook <[email protected]> Cc: Mathias Krause <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  slub: drop lock at the end of free_debug_processing  (Laura Abbott; 1 file, -14/+11)
This series takes the suggestion of Christoph Lameter and only focuses on optimizing the slow path where the debug processing runs. The two main optimizations in this series are letting the consistency checks be skipped and relaxing the cmpxchg restrictions when we are not doing consistency checks. With hackbench -g 20 -l 1000 averaged over 100 runs:

Before, slub_debug=P: mean 15.607, variance .086, stdev .294
After,  slub_debug=P: mean 10.836, variance .155, stdev .394

This still isn't as fast as what is in grsecurity, unfortunately, so there's still work to be done. Profiling ___slab_alloc shows that 25-50% of the time is spent in deactivate_slab. I haven't looked too closely to see if this is something that can be optimized. My plan for now is to focus on getting all of this merged (if appropriate) before digging into another task. This patch (of 4): Currently, free_debug_processing has a comment "Keep node_lock to preserve integrity until the object is actually freed". In actuality, the lock is dropped immediately in __slab_free. Rather than wait until __slab_free and potentially throw off the unlikely marking, just drop the lock in __slab_free. This also lets free_debug_processing take its own copy of the spinlock flags rather than trying to share the ones from __slab_free. Since there is no use for the node afterwards, change the return type of free_debug_processing to return an int, like alloc_debug_processing. Credit to Mathias Krause for the original work which inspired this series. [[email protected]: fix build] Signed-off-by: Laura Abbott <[email protected]> Acked-by: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Kees Cook <[email protected]> Cc: Mathias Krause <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm/slab: re-implement pfmemalloc support  (Joonsoo Kim; 1 file, -168/+116)
The current implementation of pfmemalloc handling in SLAB has some problems:

1) pfmemalloc_active is set to true when there are one or more pfmemalloc slabs in the system, but it is cleared when there is no pfmemalloc slab in one arbitrary kmem_cache. So, pfmemalloc_active could be wrongly cleared.
2) The search of the partial and free lists doesn't happen when non-pfmemalloc objects are not found in the cpu cache. Instead, a new slab is allocated, which is not optimal.
3) Even after sk_memalloc_socks() is disabled, the cpu cache would keep pfmemalloc objects tagged with SLAB_OBJ_PFMEMALLOC. The tag isn't cleared when sk_memalloc_socks() is disabled, so it could cause problems.
4) If the cpu cache is filled with pfmemalloc objects, it would slow down non-pfmemalloc allocations.

To me, the current pointer tagging approach looks complex and fragile, so this patch re-implements the whole thing instead of fixing the problems one by one. The design principles for the new implementation are:

1) Don't disrupt non-pfmemalloc allocations in the fast path even if sk_memalloc_socks() is enabled; that is a more likely case than pfmemalloc allocation.
2) Ensure that a pfmemalloc slab is used only for pfmemalloc allocations.
3) Don't consider performance of pfmemalloc allocations in a memory-deficient state.

As a result, all pfmemalloc alloc/free in a memory-tight state will be handled in the slow path. If there is a non-pfmemalloc free object, it will be returned first, even for a pfmemalloc user, in the fast path, so that performance of pfmemalloc users isn't affected in the normal case and pfmemalloc objects will be kept as long as possible. Signed-off-by: Joonsoo Kim <[email protected]> Tested-by: Mel Gorman <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm/slab: avoid returning values by reference  (Joonsoo Kim; 1 file, -5/+8)
Returning values by reference is bad practice. Instead, just use the function return value. Signed-off-by: Joonsoo Kim <[email protected]> Suggested-by: Christoph Lameter <[email protected]> Acked-by: Christoph Lameter <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm/slab: introduce new slab management type, OBJFREELIST_SLAB  (Joonsoo Kim; 1 file, -8/+86)
SLAB needs an array to manage freed objects in a slab. It is only used if some objects are freed, so we can use a free object itself as this array. This requires an additional branch in a somewhat critical lock path to check whether it is the first freed object or not, but that's all we need. The benefit is that we can save extra memory usage and reduce the computational overhead of allocating a management array when a new slab is created. The code change is more complex than one might expect from the idea, in order to handle the debugging feature efficiently. If you want to see only the core idea, please remove the '#if DEBUG' block in the patch. Although this idea can apply to all caches whose size is larger than the management array size, it isn't applied to caches which have a constructor. If such a cache's object were used for the management array, the constructor would have to be called for it before that object is returned to the user. I guess the overhead would overwhelm the benefit in that case, so this idea isn't applied to them, at least for now. In summary, from now on, the slab management type is determined by the following logic:

1) if the management array size is smaller than the object size and there is no ctor, it becomes OBJFREELIST_SLAB (a toy model follows after this entry).
2) if the management array size is smaller than the leftover, it becomes NORMAL_SLAB, which uses the leftover as the array.
3) if OFF_SLAB saves more memory than way 4), it becomes OFF_SLAB. It allocates a management array from another cache, so some memory waste happens.
4) others become NORMAL_SLAB. It uses dedicated internal memory in a slab as a management array, so it causes memory waste.

In my system, without CONFIG_DEBUG_SLAB, almost all caches become OBJFREELIST_SLAB or NORMAL_SLAB (using leftover), which doesn't waste memory. Following is the number of caches with each slab management type (TOTAL = OBJFREELIST + NORMAL(leftover) + NORMAL + OFF):

/Before/ 126 = 0 + 60 + 25 + 41
/After/  126 = 97 + 12 + 15 + 2

The result shows that the number of caches that don't waste memory increases from 60 to 109. I did some benchmarking and it looks like the benefit is more than the loss.

Kmalloc: Repeatedly allocate then free test
/Before/
[ 0.286809] 1. Kmalloc: Repeatedly allocate then free test
[ 1.143674] 100000 times kmalloc(32) -> 116 cycles kfree -> 78 cycles
[ 1.441726] 100000 times kmalloc(64) -> 121 cycles kfree -> 80 cycles
[ 1.815734] 100000 times kmalloc(128) -> 168 cycles kfree -> 85 cycles
[ 2.380709] 100000 times kmalloc(256) -> 287 cycles kfree -> 95 cycles
[ 3.101153] 100000 times kmalloc(512) -> 370 cycles kfree -> 117 cycles
[ 3.942432] 100000 times kmalloc(1024) -> 413 cycles kfree -> 156 cycles
[ 5.227396] 100000 times kmalloc(2048) -> 622 cycles kfree -> 248 cycles
[ 7.519793] 100000 times kmalloc(4096) -> 1102 cycles kfree -> 452 cycles
/After/
[ 1.205313] 100000 times kmalloc(32) -> 117 cycles kfree -> 78 cycles
[ 1.510526] 100000 times kmalloc(64) -> 124 cycles kfree -> 81 cycles
[ 1.827382] 100000 times kmalloc(128) -> 130 cycles kfree -> 84 cycles
[ 2.226073] 100000 times kmalloc(256) -> 177 cycles kfree -> 92 cycles
[ 2.814747] 100000 times kmalloc(512) -> 286 cycles kfree -> 112 cycles
[ 3.532952] 100000 times kmalloc(1024) -> 344 cycles kfree -> 141 cycles
[ 4.608777] 100000 times kmalloc(2048) -> 519 cycles kfree -> 210 cycles
[ 6.350105] 100000 times kmalloc(4096) -> 789 cycles kfree -> 391 cycles

In fact, I tested another idea, implementing OBJFREELIST_SLAB with an extendable linked array through another freed object. It can remove the memory waste completely, but it causes more computational overhead in the critical lock path, and it seems that the overhead outweighs the benefit. So, this patch doesn't include it. Signed-off-by: Joonsoo Kim <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Jesper Dangaard Brouer <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
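A toy userspace model of the OBJFREELIST_SLAB idea (names are illustrative, not kernel code): the first freed object's own memory doubles as the freelist index array, and because alloc/free are LIFO it is handed out last, exactly when the freelist is no longer needed:

```c
#include <stdio.h>

#define NR_OBJS		8
#define OBJ_SIZE	64	/* must hold NR_OBJS freelist_idx_t entries */

typedef unsigned char freelist_idx_t;

static unsigned char slab[NR_OBJS][OBJ_SIZE];
static freelist_idx_t *freelist;	/* lives inside a freed object */
static int nr_free;

static void obj_free(int idx)
{
	if (nr_free == 0)	/* first free: reuse the object itself */
		freelist = (freelist_idx_t *)slab[idx];
	freelist[nr_free++] = (freelist_idx_t)idx;
}

static int obj_alloc(void)
{
	return nr_free ? freelist[--nr_free] : -1;	/* -1: slab is full */
}

int main(void)
{
	obj_free(3);
	obj_free(5);
	/* prints "5 3 -1": object 3, which held the freelist, goes out last */
	printf("%d %d %d\n", obj_alloc(), obj_alloc(), obj_alloc());
	return 0;
}
```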
2016-03-15  mm/slab: factor out debugging initialization in cache_init_objs()  (Joonsoo Kim; 1 file, -6/+18)
cache_init_objs() will be changed in a following patch, and its current form doesn't fit well with that change. So, before doing it, this patch separates out the debugging initialization. This causes two loop iterations when debugging is enabled, but this overhead seems light compared to the debug feature itself, so the effect may not be visible. This patch will greatly simplify the changes to cache_init_objs() in the following patch. Signed-off-by: Joonsoo Kim <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Jesper Dangaard Brouer <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm/slab: factor out slab list fixup code  (Joonsoo Kim; 1 file, -12/+13)
The slab list should be fixed up after an object is detached from the slab, and this happens at two places. They do exactly the same thing. They will be changed in the following patch, so, to reduce code duplication, this patch factors them out into a common function. Signed-off-by: Joonsoo Kim <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Jesper Dangaard Brouer <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm/slab: make criteria for off slab determination robust and simple  (Joonsoo Kim; 1 file, -28/+17)
To become an off-slab cache, there are some constraints to avoid bootstrapping problems and recursive calls. These can be avoided differently, by simply checking that the corresponding kmalloc cache is ready and is not itself an off-slab cache. This is more robust, because static size checking can be affected by cache size changes or architecture type, while dynamic checking isn't. One check, 'freelist_cache->size > cachep->size / 2', is added to verify the benefit of choosing off-slab, because there is now no size constraint which ensures enough advantage when selecting off-slab. Signed-off-by: Joonsoo Kim <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Jesper Dangaard Brouer <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm/slab: do not change cache size if debug pagealloc isn't possible  (Joonsoo Kim; 1 file, -4/+11)
We can fail to set up an off-slab cache in some conditions. Even in this case, debug pagealloc increases the cache size to PAGE_SIZE in advance, which is wasteful because debug pagealloc cannot work for the cache when it isn't off-slab. To improve this situation, this patch first checks whether the cache with the increased size is suitable for off-slab. It only increases the cache size when the cache is suitable for off-slab, so the possible waste is removed. Signed-off-by: Joonsoo Kim <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Jesper Dangaard Brouer <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm/slab: clean up cache type determination  (Joonsoo Kim; 1 file, -34/+71)
The current cache type determination code is open-coded and hard to understand. A following patch will introduce one more cache type, which would make the code even more complex. So, before that happens, this patch abstracts this code. Signed-off-by: Joonsoo Kim <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Jesper Dangaard Brouer <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm/slab: align cache size first before determination of OFF_SLAB candidate  (Joonsoo Kim; 1 file, -11/+15)
Finding a suitable OFF_SLAB candidate is more related to the aligned cache size than to the original size. The same reasoning can be applied to the debug pagealloc candidate. So, this patch moves the alignment fixup up to the proper position. From that point on, the size is aligned, so we can remove some later alignment fixups. Signed-off-by: Joonsoo Kim <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Jesper Dangaard Brouer <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm/slab: put the freelist at the end of slab page  (Joonsoo Kim; 1 file, -68/+22)
Currently, the freelist is at the front of the slab page. This requires extra space to meet the object alignment requirement. If we put the freelist at the end of the slab page, objects can start at the page boundary and will be at the correct alignment. This is possible because the freelist has no alignment constraint itself. This gives us two benefits: it removes the extra memory space needed for freelist alignment and removes the complex calculation at the cache initialization step. I can't think of a notable drawback here. I mentioned that this would reduce extra memory space, but this benefit is rather theoretical, because it applies to very few cases. Following are example cache types that can get benefit from this change:

size  align  num  before  after
  32      8  124    4100   4092
  64      8   63    4103   4095
  88      8   46    4102   4094
 272      8   15    4103   4095
 408      8   10    4098   4090
  32     16  124    4108   4092
  64     16   63    4111   4095
  32     32  124    4124   4092
  64     32   63    4127   4095
  96     32   42    4106   4074

'before' means the whole size for the objects and the aligned freelist before applying the patch, and 'after' shows the result of this patch. Since 'before' is more than 4096, the number of objects would have to decrease, and memory waste happens. Anyway, this patch removes a complex calculation, so it looks beneficial to me. [[email protected]: fix kerneldoc] Signed-off-by: Joonsoo Kim <[email protected]> Acked-by: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Jesper Dangaard Brouer <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm/slab: remove object status buffer for DEBUG_SLAB_LEAK  (Joonsoo Kim; 1 file, -32/+2)
Now, we don't use the object status buffer in any setup. Remove it. Signed-off-by: Joonsoo Kim <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Jesper Dangaard Brouer <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm/slab: alternative implementation for DEBUG_SLAB_LEAK  (Joonsoo Kim; 1 file, -22/+63)
DEBUG_SLAB_LEAK is a debug option. Its current implementation requires a status buffer, so we need more memory to use it. And it makes the kmem_cache initialization step more complex. To remove this extra memory usage and to simplify the initialization step, this patch implements the feature in another way. When the user requests slab object owner information, a mark is set to note that information gathering has started. Then, all free objects in caches are flushed to their corresponding slab pages. Now we can distinguish all freed objects, so we know all allocated objects, too. After collecting slab object owner information on the allocated objects, the mark is checked to verify that no free happened during the processing. If true, we can be sure that our information is correct, so it is returned to the user. Although this way is rather complex, it has the two important benefits mentioned above, so I think it is worth the change. There is one drawback: it takes more time to get slab object owner information, but this is just a debug option, so it doesn't matter at all. To help review, this patch implements the new way only. The following patch will remove the now-useless code. Signed-off-by: Joonsoo Kim <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Jesper Dangaard Brouer <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm/slab: clean up DEBUG_PAGEALLOC processing code  (Joonsoo Kim; 1 file, -48/+49)
Currently, open-coded checks for a DEBUG_PAGEALLOC cache are spread across several sites. This makes the code unreadable and hard to change. This patch cleans up the code. The following patch will change the criteria for a DEBUG_PAGEALLOC cache, so this clean-up will help it, too. [[email protected]: fix build with CONFIG_DEBUG_PAGEALLOC=n] Signed-off-by: Joonsoo Kim <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Jesper Dangaard Brouer <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm/slab: use more appropriate condition check for debug_pagealloc  (Joonsoo Kim; 1 file, -3/+1)
debug_pagealloc debugging is related to SLAB_POISON flag rather than FORCED_DEBUG option, although FORCED_DEBUG option will enable SLAB_POISON. Fix it. Signed-off-by: Joonsoo Kim <[email protected]> Acked-by: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Jesper Dangaard Brouer <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm/slab: activate debug_pagealloc in SLAB when it is actually enabled  (Joonsoo Kim; 1 file, -5/+10)
Signed-off-by: Joonsoo Kim <[email protected]> Acked-by: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Jesper Dangaard Brouer <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm/slab: remove the checks for slab implementation bug  (Joonsoo Kim; 1 file, -22/+7)
Some of the "#if DEBUG" blocks are for reporting slab implementation bugs rather than user usage bugs. They are not really needed, because slab has been stable for quite a long time, and they make the code too dirty. This patch removes them. Signed-off-by: Joonsoo Kim <[email protected]> Acked-by: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Jesper Dangaard Brouer <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm/slab: remove useless structure define  (Joonsoo Kim; 1 file, -10/+1)
It is obsolete so remove it. Signed-off-by: Joonsoo Kim <[email protected]> Acked-by: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Jesper Dangaard Brouer <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm/slab: fix stale code comment  (Joonsoo Kim; 1 file, -1/+1)
This patchset implements a new freed object management scheme, OBJFREELIST_SLAB. Its purpose is to reduce memory overhead in SLAB. SLAB needs an array to manage freed objects in a slab. If there is leftover space after objects are packed into a slab, we can use it as the management array, and, in this case, there is no memory waste. But, in the other cases, we need to allocate extra memory for the management array or utilize dedicated internal memory in a slab for it. Both cases cause memory waste, so it's not good. With this patchset, a freed object itself can be used as the management array, so memory waste is reduced. The detailed idea and numbers are described in the last patch's commit description; please refer to it. In fact, I tested another idea, implementing OBJFREELIST_SLAB with an extendable linked array through another freed object. It can remove the memory waste completely, but it causes more computational overhead in the critical lock path, and it seems that the overhead outweighs the benefit. So, this patchset doesn't include it. I will attach the prototype just for reference. This patch (of 16): We use the freelist_idx_t type for free object management, whose size can be smaller than that of unsigned int. Fix the stale comment. Signed-off-by: Joonsoo Kim <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Jesper Dangaard Brouer <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2016-03-15  mm: fix some spelling  (Jesper Dangaard Brouer; 1 file, -1/+1)
Fix up trivial spelling errors, noticed while reading the code. Signed-off-by: Jesper Dangaard Brouer <[email protected]> Cc: Christoph Lameter <[email protected]> Cc: Pekka Enberg <[email protected]> Cc: David Rientjes <[email protected]> Cc: Joonsoo Kim <[email protected]> Cc: Vladimir Davydov <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>