linux-IllusionX/mm
Shaohua Li 744ed14427 mm: batch activate_page() to reduce lock contention
The zone->lru_lock is heavily contended in workloads where activate_page()
is frequently used.  We can batch activate_page() calls to reduce the lock
contention.  Batched pages are moved onto the zone's LRU lists when the
per-CPU pool is full or when page reclaim needs to drain them.
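
In rough outline the batching works as sketched below.  This is a
simplified illustration of the pattern, not the patch itself: the helper
names (activate_page_pvecs, __activate_page, activate_page_drain_pvec,
activate_page_drain) are illustrative, the reclaim hook is only described
in a comment, and the vmstat/reclaim-stat updates are elided.

#include <linux/mm.h>
#include <linux/mm_inline.h>
#include <linux/pagemap.h>
#include <linux/pagevec.h>
#include <linux/percpu.h>
#include <linux/swap.h>

/* Per-CPU pool ("pagevec") of pages waiting to be activated. */
static DEFINE_PER_CPU(struct pagevec, activate_page_pvecs);

/* Move one page to the active LRU list; caller holds zone->lru_lock. */
static void __activate_page(struct page *page)
{
        struct zone *zone = page_zone(page);

        if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
                int lru = page_lru_base_type(page);

                del_page_from_lru_list(zone, page, lru);
                SetPageActive(page);
                add_page_to_lru_list(zone, page, lru + LRU_ACTIVE);
        }
}

/* Flush the pool, taking zone->lru_lock once per run of same-zone pages
 * instead of once per activate_page() call. */
static void activate_page_drain_pvec(struct pagevec *pvec)
{
        struct zone *zone = NULL;
        int i;

        for (i = 0; i < pagevec_count(pvec); i++) {
                struct page *page = pvec->pages[i];
                struct zone *pagezone = page_zone(page);

                if (pagezone != zone) {
                        if (zone)
                                spin_unlock_irq(&zone->lru_lock);
                        zone = pagezone;
                        spin_lock_irq(&zone->lru_lock);
                }
                __activate_page(page);
        }
        if (zone)
                spin_unlock_irq(&zone->lru_lock);
        /* Drop the references taken when the pages were queued. */
        release_pages(pvec->pages, pvec->nr, pvec->cold);
        pagevec_reinit(pvec);
}

/* Hook for page reclaim's pagevec drain path to flush a partially
 * filled pool. */
void activate_page_drain(int cpu)
{
        struct pagevec *pvec = &per_cpu(activate_page_pvecs, cpu);

        if (pagevec_count(pvec))
                activate_page_drain_pvec(pvec);
}

/* Batched activate_page(): queue the page and only touch zone->lru_lock
 * (via the drain above) once the per-CPU pool is full. */
void activate_page(struct page *page)
{
        if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
                struct pagevec *pvec = &get_cpu_var(activate_page_pvecs);

                page_cache_get(page);
                if (!pagevec_add(pvec, page))   /* 0 means the pool is now full */
                        activate_page_drain_pvec(pvec);
                put_cpu_var(activate_page_pvecs);
        }
}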

For example, on a 4-socket, 64-CPU system, create a sparse file and 64
processes that shared-map the file.  Each process reads the whole file and
then exits.  Process exit runs unmap_vmas(), which triggers a lot of
activate_page() calls.  In this workload we saw about a 58% reduction in
total time with the patch below.  Other workloads that call activate_page()
heavily benefit as well.
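
For reference, a userspace sketch of that workload could look like the
following.  The file name, file size, and 4 KB touch stride are arbitrary
placeholders, not the exact benchmark that produced the numbers below.

/* 64 processes shared-map a sparse file, read every page, then exit;
 * exit tears the mappings down via unmap_vmas(), driving activate_page(). */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define NPROC    64
#define FILESIZE (1UL << 30)            /* 1 GB sparse file; size is arbitrary */

int main(void)
{
        int fd = open("sparse.dat", O_RDWR | O_CREAT, 0644);
        volatile char sum = 0;
        size_t off;
        int i;

        if (fd < 0 || ftruncate(fd, FILESIZE) < 0) {
                perror("sparse file");
                return 1;
        }

        for (i = 0; i < NPROC; i++) {
                if (fork() == 0) {
                        char *map = mmap(NULL, FILESIZE, PROT_READ,
                                         MAP_SHARED, fd, 0);
                        if (map == MAP_FAILED)
                                _exit(1);
                        /* Touch every page so it lands on the LRU lists. */
                        for (off = 0; off < FILESIZE; off += 4096)
                                sum += map[off];
                        _exit(0);       /* exit -> unmap_vmas() -> activate_page() */
                }
        }
        for (i = 0; i < NPROC; i++)
                wait(NULL);
        close(fd);
        return 0;
}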

I tested some microbenchmarks:
case-anon-cow-rand-mt		0.58%
case-anon-cow-rand		-3.30%
case-anon-cow-seq-mt		-0.51%
case-anon-cow-seq		-5.68%
case-anon-r-rand-mt		0.23%
case-anon-r-rand		0.81%
case-anon-r-seq-mt		-0.71%
case-anon-r-seq			-1.99%
case-anon-rx-rand-mt		2.11%
case-anon-rx-seq-mt		3.46%
case-anon-w-rand-mt		-0.03%
case-anon-w-rand		-0.50%
case-anon-w-seq-mt		-1.08%
case-anon-w-seq			-0.12%
case-anon-wx-rand-mt		-5.02%
case-anon-wx-seq-mt		-1.43%
case-fork			1.65%
case-fork-sleep			-0.07%
case-fork-withmem		1.39%
case-hugetlb			-0.59%
case-lru-file-mmap-read-mt	-0.54%
case-lru-file-mmap-read		0.61%
case-lru-file-mmap-read-rand	-2.24%
case-lru-file-readonce		-0.64%
case-lru-file-readtwice		-11.69%
case-lru-memcg			-1.35%
case-mmap-pread-rand-mt		1.88%
case-mmap-pread-rand		-15.26%
case-mmap-pread-seq-mt		0.89%
case-mmap-pread-seq		-69.72%
case-mmap-xread-rand-mt		0.71%
case-mmap-xread-seq-mt		0.38%

The most significant results are:
case-lru-file-readtwice		-11.69%
case-mmap-pread-rand		-15.26%
case-mmap-pread-seq		-69.72%

which use activate_page() a lot.  The others are basically run-to-run
variation, since each run differs slightly.

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2011-01-13 17:32:50 -08:00
backing-dev.c
bootmem.c
bounce.c
compaction.c thp: use compaction for all allocation orders 2011-01-13 17:32:46 -08:00
debug-pagealloc.c
dmapool.c mm/dmapool.c: use TASK_UNINTERRUPTIBLE in dma_pool_alloc() 2011-01-13 17:32:48 -08:00
fadvise.c
failslab.c
filemap.c
filemap_xip.c
fremap.c
highmem.c
huge_memory.c thp: khugepaged: make khugepaged aware about madvise 2011-01-13 17:32:47 -08:00
hugetlb.c hugetlb: fix handling of parse errors in sysfs 2011-01-13 17:32:49 -08:00
hwpoison-inject.c
init-mm.c
internal.h mm: batch activate_page() to reduce lock contention 2011-01-13 17:32:50 -08:00
Kconfig thp: select CONFIG_COMPACTION if TRANSPARENT_HUGEPAGE enabled 2011-01-13 17:32:45 -08:00
Kconfig.debug
kmemcheck.c
kmemleak-test.c
kmemleak.c
ksm.c ksm: drain pagevecs to lru 2011-01-13 17:32:49 -08:00
maccess.c
madvise.c thp: khugepaged: make khugepaged aware about madvise 2011-01-13 17:32:47 -08:00
Makefile thp: transparent hugepage core 2011-01-13 17:32:42 -08:00
memblock.c
memcontrol.c thp: compound_trans_order 2011-01-13 17:32:47 -08:00
memory-failure.c thp: compound_trans_order 2011-01-13 17:32:47 -08:00
memory.c thp: add debug checks for mapcount related invariants 2011-01-13 17:32:47 -08:00
memory_hotplug.c thp: remove PG_buddy 2011-01-13 17:32:43 -08:00
mempolicy.c thp: add numa awareness to hugepage allocations 2011-01-13 17:32:45 -08:00
mempool.c
migrate.c mm: fix hugepage migration 2011-01-13 17:32:49 -08:00
mincore.c thp: mincore transparent hugepage support 2011-01-13 17:32:44 -08:00
mlock.c
mm_init.c
mmap.c brk: fix min_brk lower bound computation for COMPAT_BRK 2011-01-13 17:32:48 -08:00
mmu_context.c
mmu_notifier.c thp: mmu_notifier_test_young 2011-01-13 17:32:46 -08:00
mmzone.c
mprotect.c thp: mprotect: transparent huge page support 2011-01-13 17:32:44 -08:00
mremap.c
msync.c
nommu.c
oom_kill.c
page-writeback.c
page_alloc.c mm/page_alloc.c: don't cache `current' in a local 2011-01-13 17:32:49 -08:00
page_cgroup.c
page_io.c
page_isolation.c
pagewalk.c
percpu-km.c
percpu-vm.c
percpu.c
pgtable-generic.c
prio_tree.c
quicklist.c
readahead.c
rmap.c thp: fix memory-failure hugetlbfs vs THP collision 2011-01-13 17:32:47 -08:00
shmem.c
slab.c
slob.c
slub.c
sparse-vmemmap.c
sparse.c thp: remove PG_buddy 2011-01-13 17:32:43 -08:00
swap.c mm: batch activate_page() to reduce lock contention 2011-01-13 17:32:50 -08:00
swap_state.c thp: split_huge_page paging 2011-01-13 17:32:41 -08:00
swapfile.c thp: split_huge_page paging 2011-01-13 17:32:41 -08:00
thrash.c
truncate.c
util.c
vmalloc.c
vmscan.c mm: batch activate_page() to reduce lock contention 2011-01-13 17:32:50 -08:00
vmstat.c thp: transparent hugepage vmstat 2011-01-13 17:32:43 -08:00